
On February 3, 2020 5:06:16 AM GMT+02:00, Christian Reiss <email@christian-reiss.de> wrote:
Hey,
they're all in this form:
[root@node03:[..]/images/6113f079-fd28-4165-a807-61bb7625cd48] # l
total 49G
drwxr-xr-x.  2 vdsm kvm 8.0K Jan 29 23:02 .
drwxr-xr-x. 46 vdsm kvm 8.0K Feb  2 23:18 ..
-rw-rw----.  1 vdsm kvm  50G Jan 29 02:02 83f7942f-c74e-4bc4-a816-09988e7ab308
-rw-rw----.  1 vdsm kvm 1.0M Jan 23 12:16 83f7942f-c74e-4bc4-a816-09988e7ab308.lease
-rw-r--r--.  1 vdsm kvm  323 Jan 29 23:02 83f7942f-c74e-4bc4-a816-09988e7ab308.meta
-rw-rw----.  1 vdsm kvm  20G Feb  2 21:42 f72a4a62-b280-4bdf-9570-96d4b6577d89
-rw-rw----.  1 vdsm kvm 1.0M Jan 29 23:02 f72a4a62-b280-4bdf-9570-96d4b6577d89.lease
-rw-r--r--.  1 vdsm kvm  251 Jan 29 23:02 f72a4a62-b280-4bdf-9570-96d4b6577d89.meta
Looks good (enough) to me.
On 03/02/2020 04:01, Darrell Budic wrote:
Check the contents of these directories:
[root@node03:/rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images] # l
total 345K
drwxr-xr-x. 46 vdsm kvm 8.0K Feb  2 23:18 .
drwxr-xr-x.  5 vdsm kvm   64 Feb  3 00:31 ..
drwxr-xr-x.  2 vdsm kvm 8.0K Jan 17 15:54 0b21c949-7133-4b34-b909-a6660ae12800
drwxr-xr-x.  2 vdsm kvm  165 Feb  3 01:48 0dde79ab-d773-4d23-b397-7c39371ccc60
drwxr-xr-x.  2 vdsm kvm 8.0K Jan 17 09:49 1347d489-012b-40fc-acb5-d00a9ea133a4
drwxr-xr-x.  2 vdsm kvm 8.0K Jan 22 15:04 1ccc4db6-f47d-4474-b0fa-a0c1eddb0fa7
drwxr-xr-x.  2 vdsm kvm 8.0K Jan 21 16:28 22cab044-a26d-4266-9af7-a6408eaf140c
drwxr-xr-x.  2 vdsm kvm 8.0K Jan 30 06:03 288d061a-6c6c-4536-a594-3bede63c0654
drwxr-xr-x.  2 vdsm kvm 8.0K Jan  9 16:46 40c51753-1533-45ab-b9de-2c51d8a18370
and what version of oVirt are you running? This looks a bit like a libvirt change/bug that changed ownership on the actual disk image to
root:root on shutdown/migration, preventing later start attempts.
This may help if that’s the case:

chown -R vdsm.kvm /rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images
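A quick way to confirm whether ownership really is the problem before running the chown (a hedged sketch, just re-using the images path from above):

    # list anything under the images tree that is NOT owned by vdsm:kvm
    find /rhev/data-center/mnt/glusterSD/node01.dc-dus.dalason.net:_ssd__storage/fec2eb5e-21b5-496b-9ea5-f718b2cb5556/images \
        \( ! -user vdsm -o ! -group kvm \) -ls

If that prints nothing, ownership is already correct and the chown will not change anything.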
On Feb 2, 2020, at 8:54 PM, Christian Reiss <email@christian-reiss.de> wrote:
Hey,
it was _while_ placing the host _into_ maintenance, to be precise. I restarted the volumes and even each machine and the entire cluster
to no avail.
I am currently migrating the disk images out of oVirt into OpenVZ/KVM to get them running. The copied disk images are flawless and working.
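A minimal sketch of such a copy-out, assuming the source volume is a raw image and using placeholder paths (substitute the real storage-domain, image and volume UUIDs and your own target path):

    # copy one disk image out of the gluster mount into a standalone qcow2 file
    qemu-img convert -p -f raw -O qcow2 \
        /rhev/data-center/mnt/glusterSD/<gluster-mount>/<storage-domain-uuid>/images/<image-uuid>/<volume-uuid> \
        /var/lib/libvirt/images/vm-disk.qcow2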
On 03/02/2020 03:28, Jayme wrote:
I checked my HCI cluster and those permissions seem to match what I'm seeing. Since there are no VMs running currently, have you tried restarting the gluster volumes as well as the glusterd service? I'm
not sure what would have caused this with one host placed in maintenance.
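For reference, a hedged sketch of what such a restart could look like, run per host; <volume> is a placeholder, and the volume should only be stopped while no VMs are using it:

    # restart the gluster management daemon on this host
    systemctl restart glusterd

    # restart a volume (only safe with no running VMs on it)
    gluster volume stop <volume>
    gluster volume start <volume>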
--
with kind regards, mit freundlichen Gruessen,
Christian Reiss
Déjà vu for me. Enable the brick trace log (only for a short time, or you will run out of space) and check whether ACL is the reason.

What is your gluster version? Did you test VM power off & power on after the last gluster upgrade?

If it is ACL, you have 3 options (not valid for 7.1 & 7.2):

1. Mount with ACL enabled:
   mount -t glusterfs -o acl brick1:/volume1 /mnt
   and run a dummy setfacl:
   find /mnt -exec setfacl -m u:root:rw {} \;

2. Kill the gluster processes and start the volume with the 'force' option:
   gluster volume start <volume> force
   (or something like that).

3. Maybe a downgrade, but I'm not in a production environment and that could be different for you.

Best Regards,
Strahil Nikolov
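For reference, a minimal sketch of how the brick trace log could be enabled and checked; the volume name ssd_storage is an assumption here, substitute your own, and note that TRACE logs grow very quickly:

    # raise the brick log level to TRACE for a short debugging window (assumed volume name: ssd_storage)
    gluster volume set ssd_storage diagnostics.brick-log-level TRACE

    # reproduce the failing VM start, then look for ACL / permission errors in the brick logs
    grep -iE 'acl|permission denied' /var/log/glusterfs/bricks/*.log | tail -n 50

    # reset the log level as soon as you are done
    gluster volume reset ssd_storage diagnostics.brick-log-level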