oVirt + Gluster issues

Hello,

running oVirt 4.4.4.7-1.el8 and Gluster 8.3. When I perform a restore of Zimbra Collaboration email with features.shard on, the VM pauses with an unknown storage error. When I perform the same restore with features.shard off, it fills all the disks in the Gluster storage domain. The same happens with older versions of Gluster and oVirt. If I use an NFS storage domain, it runs OK.

--
Jose Ferradeira
http://www.logicworks.pt

Maybe the shard xlator cannot cope with the speed of shard creation. Are you using preallocated disks on the Zimbra VM?

Best Regards,
Strahil Nikolov
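One way to tell whether an oVirt disk on a file-based (Gluster/NFS) domain is preallocated or thin is to inspect the image file itself. A minimal sketch; the UUID path components below are placeholders, not values from this thread:

# On a host that mounts the storage domain; substitute real UUIDs
# taken from the oVirt disk details.
cd /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<domain-uuid>/images/<image-uuid>
qemu-img info <volume-uuid>
# "format: raw" with a disk size close to the virtual size suggests a
# preallocated disk; "format: qcow2" indicates a thin disk that grows.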

I've run into a similar problem when using VDO + LVM + XFS stacks, and also with ZFS. If you're trying to use ZFS on 4.4, my recommendation is: don't. You have to run the testing branch at minimum, and quite a few things just don't work. As for VDO, I ran into this issue when using VDO with an NVMe device for LVM caching of the thin pool: VDO would throw a fit, and under high-load scenarios VMs would regularly pause. VDO with no cache was fine, however; it seems to be related to mixing device types / block sizes (even if you override block sizes). Not sure if that helps.

-- Alex McWhirter
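If a block-size mismatch between cache and backing devices is suspected, comparing the logical and physical sector sizes of each layer is a quick sanity check. A hedged sketch; the device paths are examples, not taken from this thread:

# Logical (--getss) and physical (--getpbsz) sector sizes per device
blockdev --getss --getpbsz /dev/nvme0n1        # example: NVMe cache device
blockdev --getss --getpbsz /dev/sdb            # example: backing disk
blockdev --getss --getpbsz /dev/mapper/vdo1    # example: VDO volume
# A 512-byte device layered with a 4K-native one is exactly the kind of
# mix that can misbehave under load, as described above.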

This is GlusterFS on top of CentOS 8.3, using LVM.

José

The Gluster storage domain is now a mess; I cannot run a VM, and I always get this error:

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR trusted.libvirt.security.selinux on /rhev/data-center/mnt/glusterSD/gs1.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c: No space left on device.

I have a second brick with 1.5TB free, so I don't know why it says "No space left on device". Also, I cannot use the ISO files I have in this storage domain.

Regards,
José

You need to use thick VM disks on Gluster, which has been the default behavior for a long time. Also, check all bricks' free space; most probably you are out of space on one of the bricks (the term for a server + mountpoint combination).

Best Regards,
Strahil Nikolov
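Per-brick free space can be read from Gluster itself or from each brick's mountpoint. A short sketch, using the volume and brick names that appear later in this thread:

# Free space as Gluster sees it, per brick
gluster volume status data1 detail | grep -E 'Brick|Disk Space Free'
# Or directly on the brick host
df -h /home/brick1 /home2/brick2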

So, how is it going? Do you have space?

Best Regards,
Strahil Nikolov

Well, I have one brick without space: 1.8TB. In fact I don't know why, because I only have one VM on that storage domain, with less than 1TB. When I try to start the VM I get this error:

VM webmail.domain.pt-3 is down with error. Exit message: Unable to set XATTR trusted.libvirt.security.selinux on /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/9b34eff0-c9a4-48e1-8ea7-87ad66a8736c: No space left on device.

I'm stuck here.

Thanks,
José

Can you provide the output of:

gluster volume info VOLUME
gluster volume status VOLUME
gluster volume heal VOLUME info summary
df -h /rhev/data-center/mnt/glusterSD/<server>:_<volume>

In pure replica volumes, the bricks should be of the same size; if not, the smallest one defines the size of the volume. If the VM has thin qcow2 disks, it will grow slowly until it reaches its maximum size or until the volume space is exhausted.

Best Regards,
Strahil Nikolov

# gluster volume info data1

Volume Name: data1
Type: Distribute
Volume ID: d7eb2c38-2707-4774-9873-a7303d024669
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: gs.domain.pt:/home/brick1
Brick2: gs.domain.pt:/home2/brick2
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
storage.owner-uid: 36
storage.owner-gid: 36
cluster.min-free-disk: 10%
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: enable
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-wait-qlength: 10000
features.shard: off
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
performance.client-io-threads: on

# gluster volume status data1

Status of volume: data1
Gluster process                       TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gs.domain.pt:/home/brick1       49153     0          Y       1824862
Brick gs.domain.pt:/home2/brick2      49154     0          Y       1824880

Task Status of Volume data1
------------------------------------------------------------------------------
There are no active volume tasks

# gluster volume heal data1 info summary

This command is supported for only volumes of replicate/disperse type. Volume data1 is not of type replicate/disperse.
Volume heal failed.

# df -h /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/

Filesystem           Size  Used Avail Use% Mounted on
gs.domain.pt:/data1  1,9T  1,8T   22G  99% /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1

Thanks,
José

And what is the status of the bricks?

df -h /home/brick1 /home2/brick2

When sharding is not enabled, a qcow2 disk cannot be spread between the bricks.

Best Regards,
Strahil Nikolov
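On a plain distribute volume without sharding, each image file lives whole on exactly one brick. Which brick a file landed on can be read from an extended attribute on the FUSE mount; a hedged sketch with placeholder path components:

# Run against a file on the Gluster FUSE mount; UUIDs are placeholders.
getfattr -n trusted.glusterfs.pathinfo \
  "/rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/<domain-uuid>/images/<image-uuid>/<volume-uuid>"
# The output names the server:/brick path holding the whole file, which is
# why one large image can fill a single brick while the other stays empty.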

# df -h /home/brick1
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-home  1,8T  1,8T   18G 100% /home

# df -h /home2/brick2
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cl-root   50G   28G   23G  56% /

If sharding is enabled, the restore pauses the VM with an unknown storage error.

Thanks,
José

If you enabled sharding, don't disable it any more; my previous comment was just a statement, not a recommendation.

Based on the output, /home2/brick2 is part of the root filesystem. Obviously, Gluster thinks that the brick is full, and you have very few options:

- extend any brick, or add more bricks to the volume (sketched below)
- shut down all VMs on that volume, reduce the minimum reserved space in Gluster (gluster volume set data1 cluster.min-free-disk 0%), and then storage-migrate the disk to a bigger datastore
- shut down all VMs on that volume (datastore), reduce the minimum reserved space in Gluster as above, and then delete unnecessary data (like snapshots you don't need any more)

The first option is the most reliable. Once Gluster has some free space to operate, you will be able to delete (consolidate) VM snapshots or to delete less important VMs (for example sandboxes, test VMs, etc.). Note that paused VMs will resume automatically and will continue to fill up Gluster, which could lead to the same situation again, so just power them off (ugly, but most probably necessary).

Best Regards,
Strahil Nikolov
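The first option maps onto Gluster's brick commands. A minimal sketch, assuming a spare disk is mounted at /home3/brick3 on the same host; that path is hypothetical:

# Add a brick to the distribute volume, then rebalance so existing
# files can migrate onto the new space
gluster volume add-brick data1 gs.domain.pt:/home3/brick3
gluster volume rebalance data1 start
gluster volume rebalance data1 status    # repeat until the task completes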


You will need to free some space; check my previous e-mail.

Best Regards,
Strahil Nikolov

I just freed all the space I could. That's the only VM in that storage domain.

José

Did you check for snapshots? You can check the contents of the /rhev... mount point.

Best Regards,
Strahil Nikolov
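On a file-based storage domain, every snapshot adds another volume file to the disk's image directory, so leftover snapshots are visible with a plain listing. A sketch reusing the image path from the error message above:

# Each data file here is one layer of the disk's snapshot chain
ls -lsh /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/d680d289-bcaa-46f2-b464-4d06d37ec1d3/images/5167f58d-68c9-475f-8b88-f278b7d4ef65/
# More than one volume file (besides the .meta and .lease files) usually
# means the disk still carries snapshots that could be merged away.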

Yes, there is one snapshot, but I cannot remove it:

Error while executing action: Cannot remove Disk Snapshot. Low disk space on Storage Domain DATA1.

Regards,
José

Did you reduce the minimum free space option in Gluster before the (failed) snapshot removal?

Best Regards,
Strahil Nikolov

Do you mean, in Manage Domain, "Critical Space Action Blocker (GB)": change 5 to 1?

José

I just changed cluster.min-free-disk to 5%, but I still get the message:

Error while executing action: Cannot remove Disk Snapshot. Low disk space on Storage Domain DATA1.

# gluster volume get data1 all | grep cluster.min-free-disk
cluster.min-free-disk                    5%

José

Exactly that one. Sadly, I have no clue how big your snapshot(s) are; maybe even with those extra gigabytes it is still too much to merge. If you have any other VMs there, try to move their disks away in order to release more space. Otherwise, you will have to find another brick to extend the volume.

How many snapshots do you see in oVirt? Maybe you have more than one.

Best Regards,
Strahil Nikolov
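A rough way to estimate how much data each snapshot layer holds is to walk the image's backing chain; a merge needs on the order of the merged layer's on-disk size in free space. A hedged sketch with placeholder UUIDs:

# Virtual size, on-disk size, and backing file for every layer.
# Run from within the image directory so relative backing paths resolve.
cd /rhev/data-center/mnt/glusterSD/gs.domain.pt:_data1/<domain-uuid>/images/<image-uuid>
qemu-img info --backing-chain <active-volume-uuid>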

Most probably that VM has a very old snapshot. Sadly, 'deletion' (merging a snapshot into the base disk) takes extra space. You will have to find temporary storage that can be added to the volume in order to delete that snapshot and free enough space.

Best Regards,
Strahil Nikolov
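If the extra storage is only needed temporarily, it can be drained and detached again once the snapshot merge has freed space. A minimal sketch, reusing the hypothetical /home3/brick3 path from the earlier add-brick example:

# 1. Extend the volume (see the earlier add-brick sketch), then delete
#    the snapshot from the oVirt UI once there is headroom.
# 2. Migrate data off the temporary brick and detach it:
gluster volume remove-brick data1 gs.domain.pt:/home3/brick3 start
gluster volume remove-brick data1 gs.domain.pt:/home3/brick3 status
# 3. Commit only after status reports the migration as completed:
gluster volume remove-brick data1 gs.domain.pt:/home3/brick3 commit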

The disks on the VM are thin provisioned.

Regards,
José
participants (3)
- Alex McWhirter
- Strahil Nikolov
- suporte@logicworks.pt