Correction: the issue came back, but I fixed it again. The actual culprit was multipathd; I had to set up device filters in /etc/multipath.conf:

blacklist {
    protocol "(scsi:adt|scsi:sbp)"
    devnode "^hd[a-z]"
    devnode "^sd[a-z]$"
    devnode "^sd[a-z]"
    devnode "^nvme0n1"
    devnode "^nvme0n1p$"
}

Probably overkill, but it works.
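For anyone applying something similar: multipathd only picks up the blacklist after the config is re-read, and any stale maps over the local disks may need flushing first. Roughly the following (standard multipath-tools commands; verify against your version):

multipath -F                   # flush existing, unused multipath maps
systemctl restart multipathd   # re-read /etc/multipath.conf
multipath -ll                  # confirm the local devices are no longer mapped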

From: Robert Tongue <phunyguy@neverserio.us>
Sent: Tuesday, January 26, 2021 2:24 PM
To: users <users@ovirt.org>
Subject: Re: VM templates
 
I fixed my own issue, and for everyone else who may run into this: the problem was that I created the first oVirt node VM inside VMware, got it fully configured with all the software/disks/partitioning/settings, then cloned it to two more VMs. Then I ran the hosted-engine deployment and set up the cluster. I think using clones for each cluster node is what confused things, because the clones shared device/system identifiers.
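For anyone who would rather salvage the clones than rebuild: I suspect, though I have not tested it, that resetting the duplicated identifiers on each clone before running the deployment would avoid this. Something along these lines:

rm -f /etc/machine-id
systemd-machine-id-setup    # regenerate a unique systemd machine ID
rm -f /etc/ssh/ssh_host_*   # drop the cloned SSH host keys
ssh-keygen -A               # regenerate host keys

Whether that alone is enough for oVirt, I can't say.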

I rebuilt all 3 node VMs from scratch, and everything works perfectly now. 

Thanks for listening.

From: Robert Tongue
Sent: Monday, January 25, 2021 10:03 AM
To: users <users@ovirt.org>
Subject: VM templates
 
Hello,

Another weird issue over here. I have the latest oVirt running inside VMware vCenter as a proof-of-concept/testing platform. Things are finally working well, for the most part; however, I am noticing strange behavior with templates and with VMs deployed from a template. Let me explain:

I created a basic Ubuntu Server VM, captured it as a template, then deployed 4 VMs from that template. The deployment went fine; however, I can only start 3 of the 4 VMs. If I shut down one of the 3 running VMs, I can then start the one that refused to start, but the one I JUST shut down will then refuse to start. The error is:

VM test3 is down with error. Exit message: Bad volume specification {'device': 'disk', 'type': 'disk', 'diskType': 'file', 'specParams': {}, 'alias': 'ua-2dc7fbff-da30-485d-891f-03a0ed60fd0a', 'address': {'bus': '0', 'controller': '0', 'unit': '0', 'type': 'drive', 'target': '0'}, 'domainID': '804c6a0c-b246-4ccc-b3ab-dd4ceb819cea', 'imageID': '2dc7fbff-da30-485d-891f-03a0ed60fd0a', 'poolID': '3208bbce-5e04-11eb-9313-00163e281c6d', 'volumeID': 'f514ab22-07ae-40e4-9146-1041d78553fd', 'path': '/rhev/data-center/3208bbce-5e04-11eb-9313-00163e281c6d/804c6a0c-b246-4ccc-b3ab-dd4ceb819cea/images/2dc7fbff-da30-485d-891f-03a0ed60fd0a/f514ab22-07ae-40e4-9146-1041d78553fd', 'discard': True, 'format': 'cow', 'propagateErrors': 'off', 'cache': 'none', 'iface': 'scsi', 'name': 'sda', 'bootOrder': '1', 'serial': '2dc7fbff-da30-485d-891f-03a0ed60fd0a', 'index': 0, 'reqsize': '0', 'truesize': '2882392576', 'apparentsize': '3435134976'}.

The underlying storage is GlusterFS, self-managed outside of oVirt.
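If it helps, I assume the right way to sanity-check that volume from one of the hosts is something like this (path copied from the error above; qemu-img because vdsm's 'cow' format is qcow2):

ls -l /rhev/data-center/3208bbce-5e04-11eb-9313-00163e281c6d/804c6a0c-b246-4ccc-b3ab-dd4ceb819cea/images/2dc7fbff-da30-485d-891f-03a0ed60fd0a/
qemu-img info /rhev/data-center/3208bbce-5e04-11eb-9313-00163e281c6d/804c6a0c-b246-4ccc-b3ab-dd4ceb819cea/images/2dc7fbff-da30-485d-891f-03a0ed60fd0a/f514ab22-07ae-40e4-9146-1041d78553fd

Happy to run that on whichever host if useful.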

I can provide any logs needed; please let me know which. Thanks in advance.