Thanks for the reply. Here are my GlusterFS options for the volume; am I missing anything critical?

[root@cluster1-vm ~]# gluster volume info storage

Volume Name: storage
Type: Distributed-Disperse
Volume ID: 67112b70-e319-4629-b768-03df9d9a0e84
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x (4 + 2) = 12
Transport-type: tcp
Bricks:
Brick1: node1-vm:/var/glusterfs/storage/1
Brick2: node2-vm:/var/glusterfs/storage/1
Brick3: node3-vm:/var/glusterfs/storage/1
Brick4: node1-vm:/var/glusterfs/storage/2
Brick5: node2-vm:/var/glusterfs/storage/2
Brick6: node3-vm:/var/glusterfs/storage/2
Brick7: node1-vm:/var/glusterfs/storage/3
Brick8: node2-vm:/var/glusterfs/storage/3
Brick9: node3-vm:/var/glusterfs/storage/3
Brick10: node1-vm:/var/glusterfs/storage/4
Brick11: node2-vm:/var/glusterfs/storage/4
Brick12: node3-vm:/var/glusterfs/storage/4
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.ping-timeout: 5
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
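
As far as I can tell these mostly line up with Gluster's built-in "virt" option group, which is what the oVirt/RHHI guides apply to volumes used for VM storage. A rough way to compare, in case something is missing (just a sketch; the group file path is the usual glusterd default and may differ on your install):

# see everything the "virt" group would set
cat /var/lib/glusterd/groups/virt

# check the live value of a single option, or dump them all for comparison
gluster volume get storage cluster.eager-lock
gluster volume get storage all

# apply the whole group in one shot (only if something turns out to be missing)
gluster volume set storage group virt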

From: Strahil Nikolov <hunter86_bg@yahoo.com>
Sent: Monday, January 25, 2021 10:56 AM
To: Robert Tongue <phunyguy@neverserio.us>; users <users@ovirt.org>
Subject: Re: [ovirt-users] VM templates
 
First of all,

verify the gluster volume options (gluster volume info <VOLNAME> ; gluster volume status <VOLNAME>). When you use HCI, oVirt sets a number of optimized options in order to get the most out of the Gluster storage.

Best Regards,
Strahil Nikolov

On Mon, 25.01.2021 at 15:03 +0000, Robert Tongue wrote:
Hello,

Another weird issue over here. I have the latest oVirt running inside VMware vCenter as a proof-of-concept/testing platform. Things are finally working well, for the most part; however, I am noticing strange behavior with templates and the VMs deployed from them. Let me explain:

I created a basic Ubuntu Server VM, captured that VM as a template, then deployed 4 VMs from that template. The deployment went fine; however, I can only start 3 of the 4 VMs. If I shut down one of the 3 that I started, I can then start the one that refused to start, and the one I JUST shut down will then refuse to start. The error is:

VM test3 is down with error. Exit message: Bad volume specification {'device': 'disk', 'type': 'disk', 'diskType': 'file', 'specParams': {}, 'alias': 'ua-2dc7fbff-da30-485d-891f-03a0ed60fd0a', 'address': {'bus': '0', 'controller': '0', 'unit': '0', 'type': 'drive', 'target': '0'}, 'domainID': '804c6a0c-b246-4ccc-b3ab-dd4ceb819cea', 'imageID': '2dc7fbff-da30-485d-891f-03a0ed60fd0a', 'poolID': '3208bbce-5e04-11eb-9313-00163e281c6d', 'volumeID': 'f514ab22-07ae-40e4-9146-1041d78553fd', 'path': '/rhev/data-center/3208bbce-5e04-11eb-9313-00163e281c6d/804c6a0c-b246-4ccc-b3ab-dd4ceb819cea/images/2dc7fbff-da30-485d-891f-03a0ed60fd0a/f514ab22-07ae-40e4-9146-1041d78553fd', 'discard': True, 'format': 'cow', 'propagateErrors': 'off', 'cache': 'none', 'iface': 'scsi', 'name': 'sda', 'bootOrder': '1', 'serial': '2dc7fbff-da30-485d-891f-03a0ed60fd0a', 'index': 0, 'reqsize': '0', 'truesize': '2882392576', 'apparentsize': '3435134976'}.

The underlying storage is GlusterFS, self-managed outside of oVirt.
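
For reference, the obvious checks from the Gluster side would be something like this (volume name and image path copied from the info/error above, so treat it as a sketch):

# anything pending heal could explain an image that only sometimes fails to open
gluster volume heal storage info

# inspect the problem disk (and its template backing file) directly from a host
qemu-img info /rhev/data-center/3208bbce-5e04-11eb-9313-00163e281c6d/804c6a0c-b246-4ccc-b3ab-dd4ceb819cea/images/2dc7fbff-da30-485d-891f-03a0ed60fd0a/f514ab22-07ae-40e4-9146-1041d78553fd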

I can provide any logs needed, please let me know which.  Thanks in advance.
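
(The ones I can easily grab are /var/log/ovirt-engine/engine.log on the engine, plus /var/log/vdsm/vdsm.log and the gluster fuse client logs under /var/log/glusterfs/ on the hosts, if those are the right ones.)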