Hi,

I am using a distributed-replicated volume for VM storage. There are 3 bricks (mounted on separate disks) in each of 3 hosts, so the topology is 3x3:

Volume Name: vms
Type: Distributed-Replicate
Volume ID: 52032ec6-99d4-4210-8fb8-ffbd7a1e0bf7
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 3 = 9
Transport-type: tcp
Bricks:
Brick1: 10.0.4.11:/gluster_bricks/vms/vms
Brick2: 10.0.4.13:/gluster_bricks/vms/vms
Brick3: 10.0.4.12:/gluster_bricks/vms/vms
Brick4: 10.0.4.11:/gluster_bricks/vms2/vms2
Brick5: 10.0.4.13:/gluster_bricks/vms2/vms2
Brick6: 10.0.4.12:/gluster_bricks/vms2/vms2
Brick7: 10.0.4.11:/gluster_bricks/vms3/vms3
Brick8: 10.0.4.12:/gluster_bricks/vms3/vms3
Brick9: 10.0.4.13:/gluster_bricks/vms3/vms3

I can see that the vms and vms2 bricks are used, but vms3 is almost empty (256 MB vs. hundreds of GBs). In fact, I added the vms3 bricks later and did not notice this, because the free space of the volume was (as I believed) reported correctly.

Today I added a VM with a 400 GB disk, and creating the disk failed with "OSError: [Errno 28] No space left on device". Now the vms and vms2 bricks are full, although the vms volume has (or should have) 914 GiB of free space.

I cannot see anything interesting in the glusterfs logs, and gluster volume info and status look good. There are errors in the engine and vdsm logs, but as far as I can see, they are only caused by the lack of space (vdsm side) and by the failed image creation (engine side).

As a side effect, the vms volume is now full and the disk of that new VM is locked (I would like to delete it).

Any ideas what can be wrong, or where to look and how to debug?
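For reference, this is what I am planning to check next. My assumption (not confirmed) is that the layout was never rebalanced after the vms3 bricks were added, so new files still hash only onto the old replica sets. The commands below are standard Gluster CLI, run on one of the hosts; the directory in the getfattr example is just a placeholder for one of the existing directories on the brick:

  # per-brick usage on each host
  df -h /gluster_bricks/vms /gluster_bricks/vms2 /gluster_bricks/vms3

  # was a rebalance ever run (and did it finish) after add-brick?
  gluster volume rebalance vms status

  # if not: fix the layout so new files can land on the new replica set,
  # then optionally migrate existing data as well
  gluster volume rebalance vms fix-layout start
  gluster volume rebalance vms start

  # check whether a directory on a vms3 brick has a DHT layout assigned
  # (<some-existing-dir> is a placeholder for a directory created before add-brick)
  getfattr -n trusted.glusterfs.dht -e hex /gluster_bricks/vms3/vms3/<some-existing-dir>

Thanks in advance,
Jiri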