Hi,

I am using a distributed-replicated volume for VM storage. There are 3 bricks (mounted on separate disks) in each of 3 hosts, so the topology is 3x3:

Volume Name: vms
Type: Distributed-Replicate
Volume ID: 52032ec6-99d4-4210-8fb8-ffbd7a1e0bf7
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.0.4.11:/gluster_bricks/vms/vms
Brick2: 10.0.4.13:/gluster_bricks/vms/vms
Brick3: 10.0.4.12:/gluster_bricks/vms/vms
Brick4: 10.0.4.11:/gluster_bricks/vms2/vms2
Brick5: 10.0.4.13:/gluster_bricks/vms2/vms2
Brick6: 10.0.4.12:/gluster_bricks/vms2/vms2
Brick7: 10.0.4.11:/gluster_bricks/vms3/vms3
Brick8: 10.0.4.12:/gluster_bricks/vms3/vms3
Brick9: 10.0.4.13:/gluster_bricks/vms3/vms3

I can see that the vms and vms2 bricks are used, but vms3 is almost empty (256MB vs hundreds of GBs). In fact, I added the vms3 bricks later and did not notice the problem, because the volume's free space was (as I believed) reported correctly.

Today I added a VM with a 400GB disk, and creating the disk failed with "OSError: [Errno 28] No space left on device".

Now the vms and vms2 bricks are full, but the vms volume has (or should have) 914GiB of free space.

I cannot see anything interesting in the glusterfs logs, and gluster volume info and status look good.

There are errors in the engine and vdsm logs, but as far as I can see they are just consequences of the lack of space (vdsm side) and of the failed image creation (engine side).

Also, as a side effect, the vms volume is full and the disk of that new VM is locked (I would like to delete it).

Any ideas what could be wrong, or where to look and how to debug?

Thanks in advance,

Jiri
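For what it's worth, this is the classic symptom of adding bricks to a distributed volume without rebalancing: Gluster only assigns hash ranges to the new bricks when the directory layouts are recomputed, so files created in pre-existing directories keep landing on the old bricks. A hedged sketch of the usual fix (volume name taken from the post; run on one of the gluster hosts):

```shell
# Recompute directory layouts so that existing directories also map onto
# the new vms3 bricks (metadata-only operation, relatively fast):
gluster volume rebalance vms fix-layout start

# Optionally also migrate some existing files onto the new bricks to
# even out disk usage (this moves data and can take a long time):
gluster volume rebalance vms start

# Watch the progress:
gluster volume rebalance vms status
```

After fix-layout, newly created files should start being distributed onto the vms3 bricks as well.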
Hi,

I tried to unlock that locked disk, but I got the error "VDSM command DeleteImageGroupVDS failed: Image does not exist in domain: 'image=42ae950a-3d2f-4590-b072-eed936b0667f, domain=6de5ae6d-c7cc-4292-bdbf-10495a38837b'" and the space on glusterfs was not reclaimed.

/dev/mapper/gluster_vg_sdb-gluster_lv_engine  100G   28G   73G   28%  /gluster_bricks/engine
/dev/mapper/gluster_vg_sdb-gluster_lv_vms     794G  792G  2.1G  100%  /gluster_bricks/vms
/dev/mapper/gluster_vg_sdd-gluster_lv_vms2    930G  929G  1.3G  100%  /gluster_bricks/vms2
/dev/mapper/gluster_vg_vms3-gluster_lv_vms3   932G  6.8G  925G    1%  /gluster_bricks/vms3

Can I clean it up somehow?

The problem with one set of bricks not being used persists and is probably on the gluster side. Still looking for help here, but I will also try the glusterfs list...

Cheers,

Jiri

https://paste.slu.cz/?a6028942ae324b47#5CUMRkUdNsKFF9sWC2JJFku2bgfEGSTZUJAhS...

well, I had this problem when expanding glusterfs these days

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/JOVHJZUUEI3SEPG...

maybe it is glue

On 2/9/26 8:06 PM, Jiri Slezka via Users wrote:
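If the engine can no longer delete the image, one option is to inspect it directly on a FUSE mount of the volume. A hedged sketch only; the server address and mount point are assumptions, and the domain/image UUIDs come from the error above:

```shell
# Mount the gluster volume directly (server address is an assumption):
mkdir -p /mnt/vms
mount -t glusterfs 10.0.4.11:/vms /mnt/vms

# File-based oVirt storage domains keep disks under
# <domain-uuid>/images/<image-uuid>; inspect the directory from the error:
ls -lh /mnt/vms/6de5ae6d-c7cc-4292-bdbf-10495a38837b/images/42ae950a-3d2f-4590-b072-eed936b0667f

# If it only holds the partially written disk of the failed VM, removing
# it should reclaim the space (double-check the contents first):
# rm -rf /mnt/vms/6de5ae6d-c7cc-4292-bdbf-10495a38837b/images/42ae950a-3d2f-4590-b072-eed936b0667f
```
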
well, I was able to delete the unused image from the gluster mount and reclaim the space.

The problem with the unused bricks persists, and any hints are welcome.

Cheers,

Jiri

On 2/10/26 11:32 AM, Jiri Slezka via Users wrote:
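To confirm whether the vms3 bricks start receiving data after a layout fix, a quick check might look like this (brick paths are from the post; the /mnt/vms FUSE mount point is an assumption):

```shell
# Compare brick usage across the three disks:
df -h /gluster_bricks/vms /gluster_bricks/vms2 /gluster_bricks/vms3

# Create a handful of test files on a FUSE mount of the volume and see
# where they land; with a corrected layout, roughly a third should show
# up under the vms3 bricks:
for i in $(seq 1 9); do touch /mnt/vms/layout-test-$i; done
ls /gluster_bricks/vms3/vms3/ | grep layout-test

# Clean up the test files afterwards:
rm -f /mnt/vms/layout-test-*
```
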
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/O5USEGKV6TSA5E...
participants (1)
- Jiri Slezka