On Thu, Aug 27, 2020 at 10:39 PM Vinícius Ferrão <ferrao(a)versatushpc.com.br>
wrote:
On 27 Aug 2020, at 16:26, Arik Hadas <ahadas(a)redhat.com> wrote:
On Thu, Aug 27, 2020 at 10:23 PM Arik Hadas <ahadas(a)redhat.com> wrote:
>
>
> On Thu, Aug 27, 2020 at 10:13 PM Vinícius Ferrão <
> ferrao(a)versatushpc.com.br> wrote:
>
>>
>>
>> On 27 Aug 2020, at 16:03, Arik Hadas <ahadas(a)redhat.com> wrote:
>>
>>
>>
>> On Thu, Aug 27, 2020 at 8:40 PM Vinícius Ferrão via Users <
>> users(a)ovirt.org> wrote:
>>
>>> Hi Michal,
>>>
>>> On 27 Aug 2020, at 05:08, Michal Skrivanek
<michal.skrivanek(a)redhat.com>
>>> wrote:
>>>
>>>
>>>
>>> On 26 Aug 2020, at 20:50, Vinícius Ferrão via Users <users(a)ovirt.org>
>>> wrote:
>>>
>>> Okay here we go Arik.
>>>
>>> With your insight I’ve done the following:
>>>
>>> # rpm -Va
>>>
>>> This showed what was zeroed on the machine; since it was a lot of things,
>>> I just went crazy and did:
>>>
>>>
>>> you should still have host deploy logs on the engine machine. it’s
>>> weird it succeeded, unless it somehow happened afterwards?
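>>> (they are usually under /var/log/ovirt-engine/host-deploy/ on the engine, if
>>> you want to check)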
>>>
>>>
>>> It only succeeded after my yum reinstall rampage.
>>>
>>> yum list installed | cut -f 1 -d " " > file
>>> yum -y reinstall `cat file | xargs`
>>>
>>> Reinstalled everything.
>>>
>>> Everything worked as expected and I finally added the machine back to
>>> the cluster. It’s operational.
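>>>
>>> (In hindsight, a narrower approach would probably have been to reinstall only
>>> the packages owning the files that rpm flagged as changed, something along
>>> these lines - an untested sketch, and it will also pick up intentionally
>>> modified config files:
>>>
>>> rpm -Va 2>/dev/null | awk '$1 ~ /5/ {print $NF}' | xargs -r rpm -qf | sort -u > bad-pkgs
>>> dnf -y reinstall $(cat bad-pkgs)
>>> )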
>>>
>>>
>>> eh, I wouldn’t trust it much. did you run redeploy at least?
>>>
>>>
>>> I did a reinstall from the engine web interface. I can reinstall the
>>> host, there's nothing running on it… going to try a third format.
>>>
>>>
>>>
>>> Now I have another issue: I have 3 VMs that are ppc64le, and when trying to
>>> import them the Hosted Engine identifies them as x86_64:
>>>
>>> <PastedGraphic-2.png>
>>>
>>> So…
>>>
>>> This appears to be a bug. Any idea how to force it back to ppc64? I
>>> can't manually force the import in the Hosted Engine since there are no
>>> buttons to do this…
>>>
>>>
>>> how exactly did you import them? could be a bug indeed.
>>> we don’t support changing it as it doesn’t make sense, the guest can’t
>>> be converted
>>>
>>>
>>> Yeah. I did the normal procedure: added the storage domain to the
>>> engine and clicked on “Import VM”. It was immediately detected as x86_64.
>>>
>>> Since I wasn’t able to upgrade my environment from 4.3.10 to 4.4.1 due
>>> to random errors when redeploying the engine with the backup from 4.3.10, I
>>> just reinstalled it, reconfigured everything and then imported the storage
>>> domains.
>>>
>>> I don't know where the architecture information is stored in the
>>> storage domain; I tried to search for some metadata files inside the domain
>>> but nothing came up. Is there a way to force this change? There must be a way.
>>>
>>> I even tried to import the machine as x86_64, so I could delete the VM
>>> and just reattach the disks to a new one, effectively not losing the data,
>>> but…
>>>
>>> <PastedGraphic-1.png>
>>>
>>> Yeah, so something is broken. The check during the import appears to be
>>> OK, but the interface does not allow me to import it to the ppc64le
>>> machine, since it's read as x86_64.
>>>
>>
>> Could you please provide the output of the following query from the
>> database:
>> select * from unregistered_ovf_of_entities where entity_name='energy.versatushpc.com.br';
>>
>>
>> Sure, there you go:
>>
>> 46ad1d80-2649-48f5-92e6-e5489d11d30c | energy.versatushpc.com.br | VM
>> | 1 | |
>> d19456e4-0051-456e-b33c-57348a78c2e0 |
>> <?xml version="1.0" encoding="UTF-8"?><ovf:Envelope
xmlns:ovf="
>>
http://schemas.dmtf.org/ovf/envelope/1/" xmlns:rasd="
>>
http://schemas.dmtf.org/wbem/wscim/1/cim
>> -schema/2/CIM_ResourceAllocationSettingData" xmlns:vssd="
>>
http://schemas.dmtf.org/wbem/wscim/1/cim-schema/2/CIM_VirtualSystemSettin...
>> xmlns:xsi="http://ww
>>
w.w3.org/2001/XMLSchema-instance"
>> ovf:version="4.1.0.0"><References><File
>>
ovf:href="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af
>> " ovf:id="b1d9832e-076f-48f3-a300-0b5cdf0949af"
ovf:size="512"
>> ovf:description="Active VM" ovf:disk_storage_type="IMAGE"
>> ovf:cinder_volume_type=""></File></R
>> eferences><NetworkSection><Info>List of
networks</Info><Network
>>
ovf:name="legacyservers"></Network></NetworkSection><Section
>> xsi:type="ovf:DiskSection_Type">
>> <Info>List of Virtual Disks</Info><Disk
>> ovf:diskId="b1d9832e-076f-48f3-a300-0b5cdf0949af"
ovf:size="40"
>> ovf:actual_size="1" ovf:vm_snapshot_id="6de58683-c586
>> -4e97-b0e8-ee7ee3baf754" ovf:parentRef=""
>>
ovf:fileRef="775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af"
>> ovf:format="http://www.vmwa
>>
re.com/specifications/vmdk.html#sparse" ovf:volume-format="RAW"
>> ovf:volume-type="Sparse" ovf:disk-interface="VirtIO_SCSI"
>> ovf:read-only="false" ovf:shareable
>> ="false" ovf:boot="true" ovf:pass-discard="false"
>> ovf:disk-alias="energy.versatushpc.com.br_Disk1"
ovf:disk-description=""
>> ovf:wipe-after-delete="false"></Di
>> sk></Section><Content ovf:id="out"
>>
xsi:type="ovf:VirtualSystem_Type"><Name>energy.versatushpc.com.br</Name><Description>Holds
>> Kosen backend and frontend prod
>> services (nginx +
>>
docker)</Description><Comment></Comment><CreationDate>2020/08/19
>> 20:11:33</CreationDate><ExportDate>2020/08/20
18:37:41</ExportDate><Delet
>>
>>
eProtected>false</DeleteProtected><SsoMethod>guest_agent</SsoMethod><IsSmartcardEnabled>false</IsSmartcardEnabled><NumOfIoThreads>1</NumOfIoThreads><TimeZone
>>
>>
>Etc/GMT</TimeZone><default_boot_sequence>9</default_boot_sequence><Generation>8</Generation><ClusterCompatibilityVersion>4.3</ClusterCompatibilityVersion><V
>>
>>
mType>1</VmType><ResumeBehavior>AUTO_RESUME</ResumeBehavior><MinAllocatedMem>2730</MinAllocatedMem><IsStateless>false</IsStateless><IsRunAndPause>false</IsRu
>>
>>
nAndPause><AutoStartup>false</AutoStartup><Priority>1</Priority><CreatedByUserId>6ea16f22-45d7-11ea-bd83-00163e518b7c</CreatedByUserId><MigrationSupport>0</M
>>
>>
igrationSupport><IsBootMenuEnabled>false</IsBootMenuEnabled><IsSpiceFileTransferEnabled>true</IsSpiceFileTransferEnabled><IsSpiceCopyPasteEnabled>true</IsSpi
>>
>>
ceCopyPasteEnabled><AllowConsoleReconnect>true</AllowConsoleReconnect><ConsoleDisconnectAction>LOCK_SCREEN</ConsoleDisconnectAction><CustomEmulatedMachine></
>>
>>
CustomEmulatedMachine><BiosType>0</BiosType><CustomCpuName></CustomCpuName><PredefinedProperties></PredefinedProperties><UserDefinedProperties></UserDefinedP
>>
>>
roperties><MaxMemorySizeMb>16384</MaxMemorySizeMb><MultiQueuesEnabled>true</MultiQueuesEnabled><UseHostCpu>false</UseHostCpu><ClusterName>Blastoise</ClusterN
>>
>>
ame><TemplateId>00000000-0000-0000-0000-000000000000</TemplateId><TemplateName>Blank</TemplateName><IsInitilized>true</IsInitilized><Origin>0</Origin><quota_
>>
>>
id>32644894-755e-4588-b967-8fb9dc327795</quota_id><DefaultDisplayType>2</DefaultDisplayType><TrustedService>false</TrustedService><OriginalTemplateId>0000000
>>
>>
0-0000-0000-0000-000000000000</OriginalTemplateId><OriginalTemplateName>Blank</OriginalTemplateName><CpuPinning></CpuPinning><UseLatestVersion>false</UseLate
>> stVersion><StopTime>2020/08/20 17:52:35</StopTime><Section
>> ovf:id="46ad1d80-2649-48f5-92e6-e5489d11d30c"
ovf:required="false"
>> xsi:type="ovf:OperatingSystemSe
>> ction_Type"><Info>Guest Operating
>>
System</Info><Description>other_linux_ppc64</Description></Section><Section
>> xsi:type="ovf:VirtualHardwareSection_Type"><Inf
>> o>2 CPU, 4096
Memory</Info><System><vssd:VirtualSystemType>ENGINE
>>
4.1.0.0</vssd:VirtualSystemType></System><Item><rasd:Caption>2
virtual
>> cpu</rasd:Caption><r
>> asd:Description>Number of virtual
>>
CPU</rasd:Description><rasd:InstanceId>1</rasd:InstanceId><rasd:ResourceType>3</rasd:ResourceType><rasd:num_of_sockets>2</r
>>
>>
asd:num_of_sockets><rasd:cpu_per_socket>1</rasd:cpu_per_socket><rasd:threads_per_cpu>1</rasd:threads_per_cpu><rasd:max_num_of_vcpus>16</rasd:max_num_of_vcpus
>>
><rasd:VirtualQuantity>2</rasd:VirtualQuantity></Item><Item><rasd:Caption>4096
>> MB of memory</rasd:Caption><rasd:Description>Memory
>> Size</rasd:Description><ra
>>
>>
sd:InstanceId>2</rasd:InstanceId><rasd:ResourceType>4</rasd:ResourceType><rasd:AllocationUnits>MegaBytes</rasd:AllocationUnits><rasd:VirtualQuantity>4096</ra
>>
>>
sd:VirtualQuantity></Item><Item><rasd:Caption>energy.versatushpc.com.br_Disk1</rasd:Caption><rasd:InstanceId>b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:Insta
>>
>>
nceId><rasd:ResourceType>17</rasd:ResourceType><rasd:HostResource>775b24a9-6a32-431a-831f-4ac9b3b31152/b1d9832e-076f-48f3-a300-0b5cdf0949af</rasd:HostResourc
>>
>>
e><rasd:Parent>00000000-0000-0000-0000-000000000000</rasd:Parent><rasd:Template>00000000-0000-0000-0000-000000000000</rasd:Template><rasd:ApplicationList></r
>>
>>
asd:ApplicationList><rasd:StorageId>d19456e4-0051-456e-b33c-57348a78c2e0</rasd:StorageId><rasd:StoragePoolId>6c54f91e-89bf-45b4-bc48-56e74c4efd5e</rasd:Stora
>> gePoolId><rasd:CreationDate>2020/08/19
>> 20:13:05</rasd:CreationDate><rasd:LastModified>1970/01/01
>> 00:00:00</rasd:LastModified><rasd:last_modified_date>2020/08
>> /20
>>
18:37:41</rasd:last_modified_date><Type>disk</Type><Device>disk</Device><rasd:Address>{type=drive,
>> bus=0, controller=1, target=0, unit=0}</rasd:Address><
>>
>>
BootOrder>1</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-775b24a9-6a32-431a-831f-4ac9b3b31152</Alias></Item><Item><rasd:Capt
>> ion>Ethernet adapter on
>>
legacyservers</rasd:Caption><rasd:InstanceId>e6e37ae1-f263-4986-a039-e8e01e72d1f4</rasd:InstanceId><rasd:ResourceType>10</rasd:Resour
>>
>>
ceType><rasd:OtherResourceType>legacyservers</rasd:OtherResourceType><rasd:ResourceSubType>3</rasd:ResourceSubType><rasd:Connection>legacyservers</rasd:Conne
>>
>>
ction><rasd:Linked>true</rasd:Linked><rasd:Name>nic1</rasd:Name><rasd:ElementName>nic1</rasd:ElementName><rasd:MACAddress>56:6f:f0:b3:00:23</rasd:MACAddress>
>>
>>
<rasd:speed>10000</rasd:speed><Type>interface</Type><Device>bridge</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><I
>>
sReadOnly>false</IsReadOnly><Alias>ua-e6e37ae1-f263-4986-a039-e8e01e72d1f4</Alias></Item><Item><rasd:Caption>USB
>> Controller</rasd:Caption><rasd:InstanceId>3<
>>
/rasd:InstanceId><rasd:ResourceType>23</rasd:ResourceType><rasd:UsbPolicy>DISABLED</rasd:UsbPolicy></Item><Item><rasd:Caption>Graphical
>> Controller</rasd:Capt
>>
>>
ion><rasd:InstanceId>1440c749-728e-4a86-afc1-8237c6055fa5</rasd:InstanceId><rasd:ResourceType>20</rasd:ResourceType><rasd:VirtualQuantity>1</rasd:VirtualQuan
>>
>>
tity><rasd:SinglePciQxl>false</rasd:SinglePciQxl><Type>video</Type><Device>vga</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</
>>
>>
IsPlugged><IsReadOnly>false</IsReadOnly><Alias>ua-1440c749-728e-4a86-afc1-8237c6055fa5</Alias><SpecParams><vram>16384</vram></SpecParams></Item><Item><rasd:C
>> aption>Graphical
>>
Framebuffer</rasd:Caption><rasd:InstanceId>603e7f0c-8d28-4c3e-bd90-c5685b752100</rasd:InstanceId><rasd:ResourceType>26</rasd:ResourceType><T
>>
>>
ype>graphics</Type><Device>vnc</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Alias><
>>
>>
/Alias></Item><Item><rasd:Caption>CDROM</rasd:Caption><rasd:InstanceId>3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</rasd:InstanceId><rasd:ResourceType>15</rasd:Reso
>>
urceType><Type>disk</Type><Device>cdrom</Device><rasd:Address>{type=drive,
>> bus=0, controller=0, target=0,
>> unit=2}</rasd:Address><BootOrder>2</BootOrder><IsPl
>>
>>
ugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-3e21d3d7-f898-4cd8-8f49-441bfc2d99ad</Alias><SpecParams><path>CentOS-8.1.1911-x86_64-boot.iso</p
>>
>>
ath></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>66f3a2b8-d2c5-4032-9f10-8742d65a0a3e</rasd:InstanceId><Type>controller
>>
>>
</Type><Device>scsi</Device><rasd:Address>{type=spapr-vio}</rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false</IsReadOnly><Al
>>
>>
ias></Alias><SpecParams><index>0</index></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>e065acb2-e7db-4f55-a1df-385f19299b
>>
>>
d0</rasd:InstanceId><Type>rng</Type><Device>virtio</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>false<
>>
>>
/IsReadOnly><Alias>ua-e065acb2-e7db-4f55-a1df-385f19299bd0</Alias><SpecParams><source>urandom</source></SpecParams></Item><Item><rasd:ResourceType>0</rasd:Re
>>
>>
sourceType><rasd:InstanceId>7b4c4ef6-2a9a-4120-b838-3127db0fd703</rasd:InstanceId><Type>balloon</Type><Device>memballoon</Device><rasd:Address></rasd:Address
>>
>>
><BootOrder>0</BootOrder><IsPlugged>true</IsPlugged><IsReadOnly>true</IsReadOnly><Alias>ua-7b4c4ef6-2a9a-4120-b838-3127db0fd703</Alias><SpecParams><model>vir
>>
>>
tio</model></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>5aade6c7-8f77-4fea-a5de-66350b214935</rasd:InstanceId><Type>con
>>
>>
troller</Type><Device>virtio-scsi</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlugged><IsReadOnly>false</IsReadOnly><Ali
>>
>>
as></Alias><SpecParams><ioThreadId></ioThreadId></SpecParams></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>4d4d7bfd-b1e8-45c3-a5e8-7e
>>
>>
0b7773bbf2</rasd:InstanceId><Type>controller</Type><Device>virtio-serial</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugged>false</IsPlu
>>
>>
gged><IsReadOnly>false</IsReadOnly><Alias>58ca7b19-0071-00c0-01d6-000000000212</Alias></Item><Item><rasd:ResourceType>0</rasd:ResourceType><rasd:InstanceId>9
>>
>>
cea63da-7afd-41d4-925f-369f993b280f</rasd:InstanceId><Type>controller</Type><Device>usb</Device><rasd:Address></rasd:Address><BootOrder>0</BootOrder><IsPlugg
>>
ed>false</IsPlugged><IsReadOnly>false</IsReadOnly><Alias></Alias><SpecParams><index>0</index><model>nec-xhci</model></SpecParams></Item></Section><Section
>> xs
>> i:type="ovf:SnapshotsSection_Type"><Snapshot
>>
ovf:id="6de58683-c586-4e97-b0e8-ee7ee3baf754"><Type>ACTIVE</Type><Description>Active
>> VM</Description><CreationDa
>> te>2020/08/19
>>
20:11:33</CreationDate></Snapshot></Section></Content></ovf:Envelope>
|
>> | 0
>>
>> Thank you!
>>
>
> thanks
> so yeah - we may have an issue with that operating system
> 'other_linux_ppc64' that has the same name as 'other_linux' in our os-info
> configuration
> as a possible workaround, assuming all those unregistered VMs you can try
> to override the architecture with:
> update unregistered_ovf_of_entities set architecture = 2;
>
as a possible workaround, assuming all those unregistered VMs are from
clusters with the same architecture, you can try to override the
architecture with: *
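(if you prefer to limit the change, the same statement can be restricted per VM,
e.g. for the VM from the query above - just a sketch, extend with the other VM
names as needed:
update unregistered_ovf_of_entities set architecture = 2 where entity_name='energy.versatushpc.com.br';
)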
Wooha!!!
engine=# update unregistered_ovf_of_entities set architecture = 2;
UPDATE 8
Worked and the VMs are now imported.
But… hahaha.
I have another issue: none of the three VMs starts now. Perhaps I'll
reinstall the host for the third time as recommended by Michal; anyway, here
are the logs that I was able to fetch during the failed power-on process:
ON THE ENGINE:
==> /var/log/ovirt-engine/engine.log <==
2020-08-27 16:35:59,437-03 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] Lock Acquired to
object
'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]',
sharedLocks=''}'
2020-08-27 16:35:59,446-03
INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] START,
IsVmDuringInitiatingVDSCommand(
IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}),
log id: 5e701801
2020-08-27 16:35:59,446-03
INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(default task-66) [b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH,
IsVmDuringInitiatingVDSCommand, return: false, log id: 5e701801
2020-08-27 16:35:59,500-03 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96] Running command: RunVmCommand
internal: false. Entities affected : ID:
ccccd416-c6b4-4c95-8372-417480be5365 Type: VMAction group RUN_VM with role
type USER
2020-08-27 16:35:59,506-03
INFO [org.ovirt.engine.core.bll.utils.EmulatedMachineUtils]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96] Emulated machine 'pseries-rhel8.2.0'
which is different than that of the cluster is set for '
jupyter.nix.versatushpc.com.br'(ccccd416-c6b4-4c95-8372-417480be5365)
2020-08-27 16:35:59,528-03
INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96]
START, UpdateVmDynamicDataVDSCommand(
UpdateVmDynamicDataVDSCommandParameters:{hostId='null',
vmId='ccccd416-c6b4-4c95-8372-417480be5365',
vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@14322872'}),
log id: 7709ba81
2020-08-27 16:35:59,530-03
INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96]
FINISH, UpdateVmDynamicDataVDSCommand, return: , log id: 7709ba81
2020-08-27 16:35:59,533-03
INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96] START, CreateVDSCommand(
CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1',
vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [
jupyter.nix.versatushpc.com.br]'}), log id: 4a0db679
2020-08-27 16:35:59,534-03
INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96]
START, CreateBrokerVDSCommand(HostName = rhvpower.local.versatushpc.com.br,
CreateVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1',
vmId='ccccd416-c6b4-4c95-8372-417480be5365', vm='VM [
jupyter.nix.versatushpc.com.br]'}), log id: 25bc7e6e
2020-08-27 16:35:59,548-03
INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96] VM <?xml
version="1.0" encoding="UTF-8"?><domain type="kvm"
xmlns:ovirt-tune="
http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"
xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
<name>jupyter.nix.versatushpc.com.br</name>
<uuid>ccccd416-c6b4-4c95-8372-417480be5365</uuid>
<memory>536870912</memory>
<currentMemory>536870912</currentMemory>
<vcpu current="128">384</vcpu>
<clock offset="variable" adjustment="0">
<timer name="rtc" tickpolicy="catchup"/>
<timer name="pit" tickpolicy="delay"/>
</clock>
<cpu mode="host-model">
<model>power9</model>
<topology cores="16" threads="4" sockets="6"/>
<numa>
<cell id="0" cpus="0-383" memory="536870912"/>
</numa>
</cpu>
<cputune/>
<qemu:capabilities>
<qemu:add capability="blockdev"/>
<qemu:add capability="incremental-backup"/>
</qemu:capabilities>
<devices>
<input type="tablet" bus="usb"/>
<channel type="unix">
<target type="virtio" name="ovirt-guest-agent.0"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.ovirt-guest-agent.0"/>
</channel>
<channel type="unix">
<target type="virtio" name="org.qemu.guest_agent.0"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/ccccd416-c6b4-4c95-8372-417480be5365.org.qemu.guest_agent.0"/>
</channel>
<emulator text="/usr/bin/qemu-system-ppc64"/>
<controller type="scsi" model="ibmvscsi"
index="0"/>
<rng model="virtio">
<backend model="random">/dev/urandom</backend>
<alias name="ua-1e18aea0-076a-40d0-9b85-21ac6049a94d"/>
</rng>
<controller type="usb" model="nec-xhci"
index="0">
<alias name="ua-47e67d9f-a191-4dc0-9c09-b2db9f1d373e"/>
</controller>
<controller type="virtio-serial" index="0"
ports="16">
<alias name="ua-4d92fb2f-aaf6-465c-8571-e49e1d12191d"/>
</controller>
<watchdog model="i6300esb" action="none">
<alias name="ua-7b756cc3-c9ec-4b79-84ef-d6ad15021f1a"/>
</watchdog>
<graphics type="vnc" port="-1" autoport="yes"
passwd="*****"
passwdValidTo="1970-01-01T00:00:01" keymap="en-us">
<listen type="network" network="vdsm-ovirtmgmt"/>
</graphics>
<controller type="scsi" model="virtio-scsi"
index="1">
<alias name="ua-8e146e76-e038-4f8a-a526-e7e1c626f54e"/>
</controller>
<memballoon model="virtio">
<stats period="5"/>
<alias name="ua-d8d37c06-de66-4912-bf8d-fc1017c85c68"/>
</memballoon>
<video>
<model type="vga" vram="16384" heads="1"/>
<alias name="ua-e96e6050-b1aa-4664-a856-8df923e3dc66"/>
</video>
<controller type="scsi" index="0">
<address type="spapr-vio"/>
</controller>
<interface type="bridge">
<model type="virtio"/>
<link state="up"/>
<source bridge="servers"/>
<driver queues="4" name="vhost"/>
<alias name="ua-152c3f8a-69d2-420f-8b6a-c1fb4a11594f"/>
<mac address="56:6f:1a:f4:00:03"/>
<mtu size="1500"/>
<filterref filter="vdsm-no-mac-spoofing"/>
<bandwidth/>
</interface>
<interface type="bridge">
<model type="virtio"/>
<link state="up"/>
<source bridge="nfs"/>
<driver queues="4" name="vhost"/>
<alias name="ua-1369da6c-4f9b-4fe3-9f45-7b37ecb34ac2"/>
<mac address="56:6f:1a:f4:00:04"/>
<mtu size="1500"/>
<filterref filter="vdsm-no-mac-spoofing"/>
<bandwidth/>
</interface>
<disk type="file" device="cdrom" snapshot="no">
<driver name="qemu" type="raw"
error_policy="report"/>
<source file="" startupPolicy="optional">
<seclabel model="dac" type="none"
relabel="no"/>
</source>
<target dev="sdc" bus="scsi"/>
<readonly/>
<alias name="ua-2d6db7ca-2fe1-4af4-9741-7b5332805d94"/>
<address bus="0" controller="0" unit="2"
type="drive" target="0"/>
</disk>
<disk snapshot="no" type="file" device="disk">
<target dev="sda" bus="scsi"/>
<source
file="/rhev/data-center/804e857c-461d-4642-86c4-7ff4a5e7da47/d19456e4-0051-456e-b33c-57348a78c2e0/images/8100a756-92a7-4160-9a31-5a843810cb61/0183b177-71b5-4c0e-b7d3-becc5da152ce">
<seclabel model="dac" type="none"
relabel="no"/>
</source>
<driver name="qemu" io="threads" type="raw"
error_policy="stop"
cache="none"/>
<alias name="ua-8100a756-92a7-4160-9a31-5a843810cb61"/>
<address bus="0" controller="1" unit="0"
type="drive" target="0"/>
<boot order="1"/>
<serial>8100a756-92a7-4160-9a31-5a843810cb61</serial>
</disk>
<lease>
<key>ccccd416-c6b4-4c95-8372-417480be5365</key>
<lockspace>d19456e4-0051-456e-b33c-57348a78c2e0</lockspace>
<target offset="24117248"
path="/rhev/data-center/mnt/192.168.10.14:
_mnt_pool0_ovirt_vm/d19456e4-0051-456e-b33c-57348a78c2e0/dom_md/xleases"/>
</lease>
</devices>
<os>
<type arch="ppc64"
machine="pseries-rhel8.2.0">hvm</type>
</os>
<metadata>
<ovirt-tune:qos/>
<ovirt-vm:vm>
<ovirt-vm:minGuaranteedMemoryMb
type="int">524288</ovirt-vm:minGuaranteedMemoryMb>
<ovirt-vm:clusterVersion>4.4</ovirt-vm:clusterVersion>
<ovirt-vm:custom/>
<ovirt-vm:device mac_address="56:6f:1a:f4:00:04">
<ovirt-vm:custom/>
</ovirt-vm:device>
<ovirt-vm:device mac_address="56:6f:1a:f4:00:03">
<ovirt-vm:custom/>
</ovirt-vm:device>
<ovirt-vm:device devtype="disk" name="sda">
<ovirt-vm:poolID>804e857c-461d-4642-86c4-7ff4a5e7da47</ovirt-vm:poolID>
<ovirt-vm:volumeID>0183b177-71b5-4c0e-b7d3-becc5da152ce</ovirt-vm:volumeID>
<ovirt-vm:imageID>8100a756-92a7-4160-9a31-5a843810cb61</ovirt-vm:imageID>
<ovirt-vm:domainID>d19456e4-0051-456e-b33c-57348a78c2e0</ovirt-vm:domainID>
</ovirt-vm:device>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:resumeBehavior>kill</ovirt-vm:resumeBehavior>
</ovirt-vm:vm>
</metadata>
</domain>
2020-08-27 16:35:59,566-03
INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateBrokerVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH,
CreateBrokerVDSCommand, return: , log id: 25bc7e6e
2020-08-27 16:35:59,570-03
INFO [org.ovirt.engine.core.vdsbroker.CreateVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96] FINISH, CreateVDSCommand, return:
WaitForLaunch, log id: 4a0db679
2020-08-27 16:35:59,570-03 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96] Lock freed to object
'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]',
sharedLocks=''}'
2020-08-27 16:35:59,576-03
INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-145178)
[b5231d22-4a33-45a6-acf4-3af7669caf96] EVENT_ID: USER_STARTED_VM(153), VM
jupyter.nix.versatushpc.com.br was started by admin@internal-authz (Host:
rhvpower.local.versatushpc.com.br).
2020-08-27 16:36:01,803-03
INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365' was
reported as Down on VDS '394e0e68-60f5-42b3-aec4-5d8368efedd1'(
rhvpower.local.versatushpc.com.br)
2020-08-27 16:36:01,804-03
INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-13) [] START, DestroyVDSCommand(HostName =
rhvpower.local.versatushpc.com.br,
DestroyVmVDSCommandParameters:{hostId='394e0e68-60f5-42b3-aec4-5d8368efedd1',
vmId='ccccd416-c6b4-4c95-8372-417480be5365',
secondsToWait='0', gracefully='false', reason='',
ignoreNoVm='true'}), log
id: 39e346b9
2020-08-27 16:36:01,959-03
INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand]
(ForkJoinPool-1-worker-13) [] FINISH, DestroyVDSCommand, return: , log id:
39e346b9
2020-08-27 16:36:01,959-03
INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-13) [] VM 'ccccd416-c6b4-4c95-8372-417480be5365'(
jupyter.nix.versatushpc.com.br) moved from 'WaitForLaunch' --> 'Down'
2020-08-27 16:36:02,024-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-13)
[] EVENT_ID: VM_DOWN_ERROR(119), VM jupyter.nix.versatushpc.com.br is
down with error. Exit message: Hook Error: (b'Traceback (most recent call
last):\n File "/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd", line
124, in <module>\n
main(VhostmdConf())\n File
"/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd",
line 47, in __init__\n dom =
minidom.parse(path)\n File "/usr/lib64/python3.6/xml/dom/minidom.py", line
1958, in parse\n return
expatbuilder.parse(file)\n File
"/usr/lib64/python3.6/xml/dom/expatbuilder.py",
line 911, in parse\n result =
builder.parseFile(fp)\n File "/usr/lib64/python3.6/xml/dom/expatbuilder.py",
line 211, in parseFile\n
parser.Parse("", True)\nxml.parsers.expat.ExpatError: no element found:
line 1, column 0\n',).
yeah, I never encountered this issue before - could be a consequence of an
improper deployment of that host
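if you want to poke at it before reinstalling: the traceback means the XML file
the hook parses is empty, which fits the zeroed-files pattern you hit earlier.
a rough, untested sketch (the config path below is an assumption, line 47 of the
hook shows the file it actually reads):

ls -l /etc/vhostmd/vhostmd.conf           # assumed path, verify against the hook
dnf reinstall vdsm-hook-vhostmd vhostmd   # vhostmd only if it is installed
dnf remove vdsm-hook-vhostmd              # alternatively, if the hook is not needed here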
2020-08-27 16:36:02,025-03
INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(ForkJoinPool-1-worker-13) [] add VM 'ccccd416-c6b4-4c95-8372-417480be5365'(
jupyter.nix.versatushpc.com.br) to rerun treatment
2020-08-27 16:36:02,029-03 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring]
(ForkJoinPool-1-worker-13) [] Rerun VM
'ccccd416-c6b4-4c95-8372-417480be5365'. Called from VDS '
rhvpower.local.versatushpc.com.br'
2020-08-27 16:36:02,041-03
WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID:
USER_INITIATED_RUN_VM_FAILED(151), Failed to run VM
jupyter.nix.versatushpc.com.br on Host rhvpower.local.versatushpc.com.br.
2020-08-27 16:36:02,066-03 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(EE-ManagedThreadFactory-engine-Thread-145179) [] Lock Acquired to object
'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]',
sharedLocks=''}'
2020-08-27 16:36:02,077-03
INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-145179) [] START,
IsVmDuringInitiatingVDSCommand(
IsVmDuringInitiatingVDSCommandParameters:{vmId='ccccd416-c6b4-4c95-8372-417480be5365'}),
log id: 5480ad0b
2020-08-27 16:36:02,077-03
INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(EE-ManagedThreadFactory-engine-Thread-145179) [] FINISH,
IsVmDuringInitiatingVDSCommand, return: false, log id: 5480ad0b
2020-08-27 16:36:02,093-03 WARN [org.ovirt.engine.core.bll.RunVmCommand]
(EE-ManagedThreadFactory-engine-Thread-145179) [] Validation of action
'RunVm' failed for user admin@internal-authz.
Reasons:
VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2020-08-27 16:36:02,093-03 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(EE-ManagedThreadFactory-engine-Thread-145179) [] Lock freed to object
'EngineLock:{exclusiveLocks='[ccccd416-c6b4-4c95-8372-417480be5365=VM]',
sharedLocks=''}'
2020-08-27 16:36:02,101-03 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(EE-ManagedThreadFactory-engine-Thread-145179) [] EVENT_ID:
USER_FAILED_RUN_VM(54), Failed to run VM jupyter.nix.versatushpc.com.br (User:
admin@internal-authz).
2020-08-27 16:36:02,105-03
INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand]
(EE-ManagedThreadFactory-engine-Thread-145180) [71c52499] Running command:
ProcessDownVmCommand internal: true.
ON THE HOST:
/var/log/messages
Aug 27 16:36:01 rhvpower python3[73682]: detected unhandled Python
exception in '/usr/libexec/vdsm/hooks/before_vm_start/50_vhostmd'
Aug 27 16:36:01 rhvpower abrt-server[73684]: Deleting problem directory
Python3-2020-08-27-16:36:01-73682 (dup of Python3-2020-08-27-16:33:11-73428)
Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Activating service
name='org.freedesktop.problems' requested by ':1.183' (uid=0 pid=73691
comm="/usr/libexec/platform-python
/usr/bin/abrt-action-" label="system_u:system_r:abrt_t:s0-s0:c0.c1023")
(using servicehelper)
Aug 27 16:36:01 rhvpower dbus-daemon[73694]: [system] Failed to reset fd
limit before activating service: org.freedesktop.DBus.Error.AccessDenied:
Failed to restore old fd limit: Operation not permitted
Aug 27 16:36:01 rhvpower dbus-daemon[9441]: [system] Successfully
activated service 'org.freedesktop.problems'
Aug 27 16:36:02 rhvpower abrt-server[73684]: /bin/sh:
reporter-systemd-journal: command not found
Regarding the import problem: that is really a bug, right? I can file it
on Red Hat Bugzilla if needed; it's the least I can do in return for the
help. Is that OK?
yes, please do
Thanks,
>
>
>
>>
>>
>>
>>>
>>>
>>> Thanks,
>>> michal
>>>
>>>
>>> Ideas?
>>>
>>> On 26 Aug 2020, at 15:04, Vinícius Ferrão <ferrao(a)versatushpc.com.br>
>>> wrote:
>>>
>>> Something strange is happening here:
>>>
>>> [root@power ~]# file /usr/bin/vdsm-client
>>> /usr/bin/vdsm-client: empty
>>> [root@power ~]# ls -l /usr/bin/vdsm-client
>>> -rwxr-xr-x. 1 root root 0 Jul 3 06:23 /usr/bin/vdsm-client
>>>
>>> A lot of files are just empty. I tried reinstalling vdsm-client, which
>>> worked, but there are other zeroed files:
>>>
>>> Transaction test succeeded.
>>> Running transaction
>>> Preparing :
>>>
>>> 1/1
>>> Reinstalling : vdsm-client-4.40.22-1.el8ev.noarch
>>>
>>> 1/2
>>> Cleanup : vdsm-client-4.40.22-1.el8ev.noarch
>>>
>>> 2/2
>>> Running scriptlet: vdsm-client-4.40.22-1.el8ev.noarch
>>>
>>> 2/2
>>> /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not
>>> checked.
>>>
>>> /sbin/ldconfig: File /lib64/libkadm5clnt.so is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libkadm5clnt_mit.so.11.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libkadm5srv.so is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5srv_mit.so is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libkadm5srv_mit.so.11.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libsensors.so.4 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libsensors.so.4.4.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt-admin.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt-admin.so.0.6000.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libvirt-lxc.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt-lxc.so.0.6000.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libvirt-qemu.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt-qemu.so.0.6000.0 is empty, not
>>> checked.
>>> /sbin/ldconfig: File /lib64/libvirt.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libvirt.so.0.6000.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libisns.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libiscsi.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0 is empty, not checked.
>>> /sbin/ldconfig: File /lib64/libopeniscsiusr.so.0.2.0 is empty, not
>>> checked.
>>>
>>> Verifying : vdsm-client-4.40.22-1.el8ev.noarch
>>>
>>> 1/2
>>> Verifying : vdsm-client-4.40.22-1.el8ev.noarch
>>>
>>> 2/2
>>> Installed products updated.
>>>
>>> Reinstalled:
>>> vdsm-client-4.40.22-1.el8ev.noarch
>>>
>>>
>>>
>>> I've never seen anything like this.
>>>
>>> I've already reinstalled the host from scratch and the same thing
>>> happens.
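>>>
>>> (since even a fresh install ends up with zeroed files, it might be worth
>>> checking the node for disk/filesystem errors before the next attempt, e.g.
>>> with something like:
>>>
>>> journalctl -k -p 3 -b | tail -n 50
>>> )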
>>>
>>>
>>> On 26 Aug 2020, at 14:28, Vinícius Ferrão via Users <users(a)ovirt.org>
>>> wrote:
>>>
>>> Hello Arik,
>>> This is probably the issue. The output is totally empty:
>>>
>>> [root@power ~]# vdsm-client Host getCapabilities
>>> [root@power ~]#
>>>
>>> Here are the packages installed on the machine (rpm -qa output, grepped
>>> for ovirt and vdsm):
>>> ovirt-imageio-daemon-2.0.8-1.el8ev.ppc64le
>>> ovirt-imageio-client-2.0.8-1.el8ev.ppc64le
>>> ovirt-host-4.4.1-4.el8ev.ppc64le
>>> ovirt-vmconsole-host-1.0.8-1.el8ev.noarch
>>> ovirt-host-dependencies-4.4.1-4.el8ev.ppc64le
>>> ovirt-imageio-common-2.0.8-1.el8ev.ppc64le
>>> ovirt-vmconsole-1.0.8-1.el8ev.noarch
>>> vdsm-hook-vmfex-dev-4.40.22-1.el8ev.noarch
>>> vdsm-hook-fcoe-4.40.22-1.el8ev.noarch
>>> vdsm-hook-ethtool-options-4.40.22-1.el8ev.noarch
>>> vdsm-hook-openstacknet-4.40.22-1.el8ev.noarch
>>> vdsm-common-4.40.22-1.el8ev.noarch
>>> vdsm-python-4.40.22-1.el8ev.noarch
>>> vdsm-jsonrpc-4.40.22-1.el8ev.noarch
>>> vdsm-api-4.40.22-1.el8ev.noarch
>>> vdsm-yajsonrpc-4.40.22-1.el8ev.noarch
>>> vdsm-4.40.22-1.el8ev.ppc64le
>>> vdsm-network-4.40.22-1.el8ev.ppc64le
>>> vdsm-http-4.40.22-1.el8ev.noarch
>>> vdsm-client-4.40.22-1.el8ev.noarch
>>> vdsm-hook-vhostmd-4.40.22-1.el8ev.noarch
>>>
>>> Any ideas to try?
>>>
>>> Thanks.
>>>
>>> On 26 Aug 2020, at 05:09, Arik Hadas <ahadas(a)redhat.com> wrote:
>>>
>>>
>>>
>>> On Mon, Aug 24, 2020 at 1:30 AM Vinícius Ferrão via Users <
>>> users(a)ovirt.org> wrote:
>>>
>>>> Hello, I was using oVirt 4.3.10 with IBM AC922 (POWER9 / ppc64le)
>>>> without any issues.
>>>>
>>>> Since I moved to 4.4.1 I can't add the AC922 machine to the engine
>>>> anymore; it complains with the following error:
>>>> The host CPU does not match the Cluster CPU type and is running in
>>>> degraded mode. It is missing the following CPU flags: model_POWER9, powernv.
>>>>
>>>> Any idea what may be happening? The engine runs on x86_64, and I
>>>> was using it this way on 4.3.10.
>>>
>>>
>>>> Machine info:
>>>> timebase : 512000000
>>>> platform : PowerNV
>>>> model : 8335-GTH
>>>> machine : PowerNV 8335-GTH
>>>> firmware : OPAL
>>>> MMU : Radix
>>>>
>>>
>>> Can you please provide the output of 'vdsm-client Host getCapabilities'
>>> on that host?
>>>
>>>
>>>>
>>>> Thanks,
>>>>
>>>>