First of all, you didn't run 'mkfs.xfs -i size=512'. You just ran 'mkfs.xfs', which is not good and could have caused your VM problems. Also, check the isize of the FS with xfs_info.
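
For example (a quick check; the mount point /gluster_bricks/data is an assumption, adjust it to your layout):

xfs_info /gluster_bricks/data | grep isize

An oVirt Gluster brick should report isize=512.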

You have to find the UUIDs of the affected VM's disks.
Then go to the removed host and find that file -> this is the so-called shard 1 (the base file).
Then you need to find the GFID of the file.
The easiest way is to go to the "dead" cluster and find the hard links in the .glusterfs directory.

Something like this:

ssh <old host>                  # the one specified in the remove-brick
cd /gluster_bricks/data/data/<storage uuid>/images/<disk uuid>
ls -li <disk name>              # take the first number (the inode number)
find /gluster_bricks/data/data -inum <inode number from ls>

The find should print two paths: the data file itself and its hard link under .glusterfs, whose file name is the GFID.
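
Hypothetical output (the paths and GFID here are invented for illustration):

/gluster_bricks/data/data/<storage uuid>/images/<disk uuid>/<disk name>
/gluster_bricks/data/data/.glusterfs/fa/12/fa123456-89ab-cdef-0123-456789abcdef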

Then copy the file from images/<disk uuid>/<disk name> somewhere with enough free space.
Go to /gluster_bricks/data/data/.shard.
List all the shard files:

ls -l <gfid string from find>.*

These are your shards. Just cat the base file + the shards (in numeric order) into another file.
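
A minimal sketch of the reassembly (the GFID value and the output path /tmp/recovered.img are placeholders; it also assumes no shard file is missing, since a missing shard number means a sparse hole and plain concatenation would shift everything after it):

GFID="<gfid string from find>"
cp /gluster_bricks/data/data/<storage uuid>/images/<disk uuid>/<disk name> /tmp/recovered.img
cd /gluster_bricks/data/data/.shard
for N in $(ls "$GFID".* | awk -F. '{print $NF}' | sort -n); do
    cat "$GFID.$N" >> /tmp/recovered.img    # append shards in numeric order
done
qemu-img info /tmp/recovered.img            # sanity-check the result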


This should be your VM disk.


Best Regards,
Strahil Nikolov


On Tue, Aug 3, 2021 at 12:58, David White via Users
<users@ovirt.org> wrote:
Hi Patrick,
This would be amazing, if possible.

Checking /gluster_bricks/data/data on the host where I've removed (but not replaced) the bricks, I see a single directory.
When I go into that directory, I see two directories:

dom_md
images

If I go into the images directory, I think I see the hash folders that you're referring to, and inside each of those, I see the 3 files you referenced.

Unfortunately, those files clearly don't have all of the data.
The parent folder for all of the hash folders is only 687M.

[root@cha1-storage data]# du -skh *
687M    31366488-d845-445b-b371-e059bf71f34f

And the "iso" files are small. The one I'm looking at now is only 19M.
It appears that most of the actual data is located in /gluster_bricks/data/data/.glusterfs, and all of those folders have random, incomprehensible names that I'm not sure how to interpret.

Perhaps you were on an older version of Gluster, and the actual data hierarchy is different?
I don't know. But I do see the 3 files you referenced, so that's a start, even if they are nowhere near the correct size.


‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐

On Tuesday, August 3rd, 2021 at 1:49 AM, Patrick Lomakin <patrick.lomakin@gmail.com> wrote:

> Greetings! I once wondered how data is stored between replicated bricks, specifically how disks are stored on the storage domain in Gluster. I checked a mounted brick via the standard path (the path may be different), /gluster/data/data, and saw many directories there. Maybe the hierarchy is different; I can't check now. But in the end I got a list of directories, where each directory name is a disk image hash. Inside such a /HASH directory there were 3 files. The first is a disk in raw/iso/qcow2 format (the file has no extension; I judged by the size); the other two files are the configuration and metadata. I downloaded the disk image file (.iso) to my computer via the curl command and the service www.station307.com (no ads), and I got back the original .iso that I had uploaded to the storage domain through the hosted-engine interface. Maybe this way you can download the disk image to your computer, then load it via the GUI and connect it to a virtual machine. Good luck!