I may have made matters worse.  I changed the cluster to 4.3 compatibility, then the data center to 4.3 compatibility.  All VMs were marked as requiring a reboot.  I restarted a couple of them, and none of them will start up; they fail with "bad volume specification".  The VMs I have not yet restarted are still running fine.  I need to figure out why the restarted VMs won't start.

Here is an example from vdsm.log

VolumeError: Bad volume specification {'address': {'function': '0x0', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'slot': '0x06'}, 'serial': 'd81a6826-dc46-44db-8de7-405d30e44d57', 'index': 0, 'iface': 'virtio', 'apparentsize': '64293699584', 'specParams': {}, 'cache': 'none', 'imageID': 'd81a6826-dc46-44db-8de7-405d30e44d57', 'truesize': '64293814272', 'type': 'disk', 'domainID': '1f2e9989-9ab3-43d5-971d-568b8feca918', 'reqsize': '0', 'format': 'cow', 'poolID': 'a45e442e-9989-11e8-b0e4-00163e4bf18a', 'device': 'disk', 'path': '/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/1f2e9989-9ab3-43d5-971d-568b8feca918/images/d81a6826-dc46-44db-8de7-405d30e44d57/2d6d5f87-ccb0-48ce-b3ac-84495bd12d32', 'propagateErrors': 'off', 'name': 'vda', 'bootOrder': '1', 'volumeID': '2d6d5f87-ccb0-48ce-b3ac-84495bd12d32', 'diskType': 'file', 'alias': 'ua-d81a6826-dc46-44db-8de7-405d30e44d57', 'discard': False}
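(Side note for anyone hitting the same error: a quick sanity check to run on the host, sketched in Python.  The path is copied from the log above, and since the log shows 'format': 'cow' the volume is qcow2, so a missing or unreadable parent in the snapshot chain would show up here.  This is only a sketch, not vdsm's own validation.)

    import os
    import subprocess

    # Volume path taken verbatim from the log entry above.
    path = ('/rhev/data-center/a45e442e-9989-11e8-b0e4-00163e4bf18a/'
            '1f2e9989-9ab3-43d5-971d-568b8feca918/images/'
            'd81a6826-dc46-44db-8de7-405d30e44d57/'
            '2d6d5f87-ccb0-48ce-b3ac-84495bd12d32')

    print('exists:', os.path.exists(path))
    # --backing-chain walks every qcow2 layer, so a broken chain is
    # reported even when the top volume itself is present.
    subprocess.run(['qemu-img', 'info', '--backing-chain', path], check=False)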

On Wed, Feb 13, 2019 at 1:01 PM Jayme <jaymef@gmail.com> wrote:
I think I just figured out what I was doing wrong.  On the edit cluster screen I was changing both the CPU type and the cluster compatibility level to 4.3 at the same time.  I tried again by switching to the new CPU type first (leaving the cluster on 4.2) and saving, then going back in and switching the compatibility level to 4.3.  It appears you need to do this in two steps for it to work.
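For reference, the same two-step change can be scripted with the Python SDK (ovirtsdk4).  This is only a sketch: the engine URL, credentials, cluster id, and the exact CPU type string are placeholders, not values from this thread.

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )
    cluster_service = (connection.system_service()
                       .clusters_service()
                       .cluster_service('CLUSTER_ID'))  # hypothetical id

    # Step 1: change only the CPU type, leaving the cluster on 4.2.
    cluster_service.update(types.Cluster(
        cpu=types.Cpu(type='Intel SandyBridge IBRS SSBD Family')))  # assumed name

    # Step 2: raise the compatibility level in a separate update.
    cluster_service.update(types.Cluster(
        version=types.Version(major=4, minor=3)))

    connection.close()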



On Wed, Feb 13, 2019 at 12:57 PM Jayme <jaymef@gmail.com> wrote:
Hmm, interesting.  I wonder how you were able to switch from SandyBridge IBRS to SandyBridge IBRS SSBD.  I just attempted the same in both regular mode and global maintenance mode, and it won't allow me to; it says that all hosts have to be in maintenance mode (screenshots attached).  Are you also running an HCI/Gluster setup?
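In case it helps to see exactly which hosts the engine thinks are not in maintenance, a quick sketch with the Python SDK (connection details are placeholders, as in the earlier sketch):

    import ovirtsdk4 as sdk

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )
    # Print each host with its status; the CPU-type change is refused
    # unless every host here reports maintenance.
    for host in connection.system_service().hosts_service().list():
        print(host.name, host.status)
    connection.close()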



On Wed, Feb 13, 2019 at 12:44 PM Ron Jerome <ronjero@gmail.com> wrote:
> Environment setup:
>
> 3 Host HCI GlusterFS setup.  Identical hosts, Dell R720s w/ Intel E5-2690
> CPUs
>
> 1 default data center (4.2 compat)
> 1 default cluster (4.2 compat)
>
> Situation: I recently upgraded my three-node HCI cluster from oVirt 4.2 to
> 4.3.  I did so by first updating the engine to 4.3, then upgrading each
> ovirt-node host to 4.3 and rebooting.
>
> Currently engine and all hosts are running 4.3 and all is working fine.
>
> To complete the upgrade I need to update cluster compatibility to 4.3 and
> then data centre to 4.3.  This is where I am stuck.
>
> The CPU type on the cluster is "Intel SandyBridge IBRS Family".  This option is
> no longer available if I select 4.3 compatibility.  Any other option chosen,
> such as SandyBridge IBRS SSBD, will not allow me to switch to 4.3, as all
> hosts must be in maintenance mode (which is not possible w/ a self-hosted
> engine).
>
> I saw another post about this where someone else followed steps to create a
> second cluster on 4.3 with the new CPU type, then move one host to it, start
> the engine on it, and then perform other steps to eventually get to 4.3
> compatibility.
>

I have the exact same hardware configuration and was able to change to "SandyBridge IBRS SSBD" without creating a new cluster.  I'm not sure exactly how I made that happen, but the cluster may have been in "Global Maintenance" mode when I changed it.
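And for anyone who does need the second-cluster workaround quoted above, the first step would look roughly like this with the Python SDK (a sketch only; the new cluster name, data center name, and CPU type string are all assumptions):

    import ovirtsdk4 as sdk
    import ovirtsdk4.types as types

    connection = sdk.Connection(
        url='https://engine.example.com/ovirt-engine/api',  # placeholder
        username='admin@internal',
        password='secret',
        ca_file='ca.pem',
    )
    # Create a second cluster already at 4.3 with the new CPU type;
    # hosts can then be moved into it one at a time.
    connection.system_service().clusters_service().add(types.Cluster(
        name='Default43',                               # assumed name
        data_center=types.DataCenter(name='Default'),   # assumed name
        cpu=types.Cpu(
            architecture=types.Architecture.X86_64,
            type='Intel SandyBridge IBRS SSBD Family',  # assumed string
        ),
        version=types.Version(major=4, minor=3),
    ))
    connection.close()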

