Re: Damaged hard disk in volume replica gluster

As you are replacing an old brick, you have to recreate the old LV and mount it at the same location. Then you can use gluster's "reset-brick" (I think the oVirt UI has that option too) and all data will be replicated there. You also have the "replace-brick" option if you decide to change the mount location.

P.S.: With replica volumes your volume should still be working; in case it has stopped, you have to investigate before proceeding.

Best Regards,
Strahil Nikolov

On Oct 12, 2019 13:12, matteo fedeli <matmilan97@gmail.com> wrote:
Hi, I have a volume in my oVirt HCI setup that does not work properly because one of its three HDDs failed. I bought a new HDD and recreated the old LVM layout and mount point. Now, how can I attach this empty new brick?
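For reference, a minimal sketch of the brick re-creation Strahil describes above. The VG/LV names (gluster_vg_sdX, the thin pool name) and the size are hypothetical placeholders; they should match whatever the old brick used. The mount path /gluster_bricks/data/data is the one from this thread's volume.

    # assume the replacement disk shows up as /dev/sdX (hypothetical name)
    pvcreate /dev/sdX
    vgcreate gluster_vg_sdX /dev/sdX
    # recreate the thin pool and the thin LV with the old names/size (500G is a placeholder)
    lvcreate -l 100%FREE -T gluster_vg_sdX/gluster_thinpool_gluster_vg_sdX
    lvcreate -V 500G -T gluster_vg_sdX/gluster_thinpool_gluster_vg_sdX -n gluster_lv_data
    mkfs.xfs /dev/gluster_vg_sdX/gluster_lv_data
    # mount at the same path the old brick used and restore the SELinux context
    mount /dev/gluster_vg_sdX/gluster_lv_data /gluster_bricks/data
    mkdir -p /gluster_bricks/data/data
    restorecon -RFvv /gluster_bricks/data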

Doing reset-brick I get this error (the first part is the Italian oVirt UI message, "Error while executing action Start Gluster Volume Reset Brick"):

Errore durante l'esecuzione dell'azione Start Gluster Volume Reset Brick: Volume reset brick start failed: rc=-1 out=() err=['brick: asia_planet_bn:/gluster_bricks/data/data does not exist in volume: data']

What is the output of 'gluster volume info data'?

Best Regards,
Strahil Nikolov

Volume Name: data
Type: Replicate
Volume ID: abcb0ee0-0e9d-4796-8d7d-143ea2692486
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: kansas.planet.bn:/gluster_bricks/data/data
Brick2: germany.planet.bn:/gluster_bricks/data/data
Brick3: singapore.planet.bn:/gluster_bricks/data/data
Options Reconfigured:
performance.client-io-threads: on
nfs.disable: on
transport.address-family: inet
performance.strict-o-direct: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.low-prio-threads: 32
network.remote-dio: off
cluster.eager-lock: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
cluster.data-self-heal-algorithm: full
cluster.locking-scheme: granular
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
features.shard: on
user.cifs: off
cluster.choose-local: off
client.event-threads: 4
server.event-threads: 4
network.ping-timeout: 30
storage.owner-uid: 36
storage.owner-gid: 36
cluster.granular-entry-heal: enable
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on

It seems that there is no 'asia_planet_bn:/gluster_bricks/data/data' brick in that volume. Reset-brick expects that you have replaced the same brick and mounted it at the same place (and that it is empty). If you want to replace it with a brick on another host, you should use the 'replace-brick' option.

Best Regards,
Strahil Nikolov
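As an illustration of the point above (the volume name 'data' is from this thread, the rest is the standard gluster CLI), you can list the brick names the volume actually knows about and use one of them verbatim in the reset-brick command:

    gluster volume info data | grep '^Brick[0-9]'
    # Brick1: kansas.planet.bn:/gluster_bricks/data/data
    # Brick2: germany.planet.bn:/gluster_bricks/data/data
    # Brick3: singapore.planet.bn:/gluster_bricks/data/data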

What do you mean by the same place? Before, this HDD was /dev/md0; now it is /dev/sdc. If I run lsblk now, the LVM configuration and mount point are the same... My goal is to replace a damaged hard disk on the same host.

According to your message you have these bricks:

Bricks:
Brick1: kansas.planet.bn:/gluster_bricks/data/data
Brick2: germany.planet.bn:/gluster_bricks/data/data
Brick3: singapore.planet.bn:/gluster_bricks/data/data

And you tried to reset-brick 'asia.planet.bn:/gluster_bricks/data/data', which is not in the brick list. The actual name of the device is not important; you just need to mount it at the same location on the same server - otherwise you need the replace-brick operation.

As you haven't mentioned which brick is broken, I will give you a fictional one. For example, say the disk/disks at server 'singapore.planet.bn' mounted at '/gluster_bricks/data/data' have broken. You replace the disk/disks and mount them at the same mount point, so server 'singapore.planet.bn' has a block device mounted at '/gluster_bricks/data/data' that is completely empty. Then you just run:

gluster volume reset-brick data singapore.planet.bn:/gluster_bricks/data/data start

which should stop the brick process (if you haven't rebooted the server), and then:

gluster volume reset-brick data singapore.planet.bn:/gluster_bricks/data/data singapore.planet.bn:/gluster_bricks/data/data commit

to restart the processes. You may have to add "force" at the end. And last, force a heal:

gluster volume heal data full

Best Regards,
Strahil Nikolov
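A small follow-up that may be useful once the reset-brick has been committed (standard gluster commands, using the volume name from this thread); it lets you watch the new brick come online and the self-heal catch up:

    # check that all three brick processes are up again
    gluster volume status data
    # list entries still pending heal; the count should shrink to zero over time
    gluster volume heal data info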

Hi Strahil, I am also running into an issue trying to replace a brick.

1. /dev/sdd failed; it is our /gluster_bricks/data2/data2 brick. That disk was taken out of the array controller and a new one was added.

2. There are quite a few entries related to /dev/sdd, i.e.:

[root@host1]# dmsetup info | grep sdd
Name: gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd-tpool
Name: gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd_tdata
Name: gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd_tmeta
Name: gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd
Name: gluster_vg_sdd-gluster_lv_data2

This causes the OS to see the new disk as /dev/sde (the controller presents it as /dev/sde):

array C
Logical Drive: 3
Size: 2.7 TB
...
Disk Name: /dev/sde
Mount Points: None
...
Drive Type: Data
LD Acceleration Method: Controller Cache

If I just remove all the thinpool entries (if I find them), do you think that might do the trick?

thanks,
Adrian
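For what it's worth, a hedged sketch of what clearing those leftover device-mapper entries could look like, assuming the volume group gluster_vg_sdd lived only on the failed /dev/sdd and none of these maps are still mounted or in use. This is only one possible cleanup; Strahil's procedure further down in the thread does not strictly require it.

    # list the stale maps left behind by the dead PV
    dmsetup info -c | grep gluster_vg_sdd
    # remove them top-down: thin LV first, then the pool, then its internals
    dmsetup remove gluster_vg_sdd-gluster_lv_data2
    dmsetup remove gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd
    dmsetup remove gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd-tpool
    dmsetup remove gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd_tdata
    dmsetup remove gluster_vg_sdd-gluster_thinpool_gluster_vg_sdd_tmeta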

Hi Adrian,

I don't understand your layout. Can you clarify the following:
- /dev/sdd was a PV
- /dev/sdd was not the only PV in the VG
- A thin LV was created from the VG
- The thin LV was used as a gluster brick

I don't get why you want the new disk to become /dev/sdd again, as it won't have the necessary LVM metadata. Gluster works with mount points. You can mount whatever you wish at /gluster_bricks/data2 and it will try to use that. As the brick doesn't exist any more, you will have to use the 'reset-brick' or 'replace-brick' commands.

Best Regards,
Strahil Nikolov

Strahil, let me take a step back and provide a bit more context of what we need to achieve.

We have a 3 node HCI setup, and host1 has a failed drive (/dev/sdd) that is used entirely for one brick (/gluster_bricks/data2/data2); this is a replica 3 setup. The issue we have is that we don't have any more drive bays in our enclosure cages to add an extra disk and use it to replace the bad drive/brick (/dev/sdd).

What would be the best way to replace the drive/brick in this situation, and in what order do the steps need to be completed? I think I know how to replace a bad brick with a different brick and get things up and running, but in this case, as mentioned above, I have no more drive bays to allocate to a new drive.

Hm... What about the LVM setup?

I will assume some stuff, but this doesn't mean that it is right. Assumptions:
- /dev/sdd is the only PV in the VG
- You have a thin pool on top of /dev/sdd
- When /dev/sdd is missing, the VG disappears
- You have excluded /dev/sdd from multipath

So, you need to:

1. Remove the broken disk (it's useless anyway)
2. Push in the new one
3. If you have a RAID controller - create a logical LUN with only the new disk
4. The new disk will be something like /dev/sdz
5. Create a new PV, VG, LV:

vgcreate OLD_VG /dev/sdz
Note: If you use filters - pick the proper disk name
lvcreate -l 100%FREE -T OLD_VG/OLD_THINPOOL
lvcreate -V <old_size>G -T OLD_VG/OLD_THINPOOL -n OLD_THIN_VOL_NAME

6. Create a filesystem:

mkfs.xfs /dev/OLD_VG/OLD_THIN_VOL_NAME

Note: you might get errors on all LVM commands complaining about /dev/sdd. To get rid of that, you have to delete that disk:

echo 1 > /sys/block/sdd/device/delete

7. Update /etc/fstab or the systemd unit if the old FS was not XFS
8. Mount the brick:

mount /gluster_bricks/data2
mkdir /gluster_bricks/data2/data2
restorecon -RFvv /gluster_bricks/data2

9. Reset the brick:
A) You can do it from the oVirt UI -> Storage -> Volumes -> Bricks -> Reset Brick (up left)
B) gluster volume reset-brick data2 node_name:/gluster_bricks/data2/data2 node_name:/gluster_bricks/data2/data2 commit force
C) gluster volume replace-brick data2 node_name:/gluster_bricks/data2/data2 node_name:/gluster_bricks/data2/data2 commit force

Note: I had a bad experience with 'reset-brick' in gluster v3, so I prefer to use replace-brick.
Note 2: What's the difference between B and C? replace-brick allows the destination to be another server and/or another directory; reset-brick doesn't allow that.

Once the brick is up, we should be ready to go.

Best Regards,
Strahil Nikolov
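To make step 7 concrete, a hedged example of what the /etc/fstab line for the rebuilt brick might look like. The device name follows the OLD_VG/OLD_THIN_VOL_NAME placeholders above, and the mount options are ones commonly used for XFS gluster bricks; adjust both to the actual setup.

    # /etc/fstab - hypothetical entry for the rebuilt brick LV
    /dev/mapper/OLD_VG-OLD_THIN_VOL_NAME  /gluster_bricks/data2  xfs  inode64,noatime,nodiratime  0 0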

This is just my opinion, but I would attach external storage and dd the drive using the option to ignore errors, then replace the drive(s) and dd the image back to the replacement drive. I don't know if gluster has a snapshot or backup of this drive that would rebuild it for you. This is just my opinion.

Eric Evans
Digital Data Services LLC.
304.660.9080
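A rough sketch of the kind of rescue copy Eric is describing (device and image paths are hypothetical; run it only with the brick unmounted):

    # copy the failing disk to an image on external storage, ignoring read errors
    dd if=/dev/sdd of=/mnt/external/sdd.img bs=1M conv=noerror,sync status=progress
    # after swapping in the replacement disk (shown here as /dev/sdX), write the image back
    dd if=/mnt/external/sdd.img of=/dev/sdX bs=1M status=progress

As Strahil notes in the reply below, with a replica 3 volume this is normally unnecessary, since the data still exists on the other two bricks.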

Hi Eric,

As this is a replica volume, you already have the data at the 2 other locations. You don't need the broken disk's data - you can access the other bricks and back up from there. Gluster is not interested in anything but the mount point. You can set up a new brick on the same or a different path and the replace-brick option will take care of it.

Best Regards,
Strahil Nikolov

Strahil, something that just came to my attention is that we have a second cluster with VDO enabled. Is there something out there that explains how to accomplish a disk replacement with VDO?

Thanks,
Adrian

Hi Adrian,

I'm also using VDO and thus my bricks are mounted via a '.mount' unit. From Gluster's perspective it just requires a directory with enough space. If the brick name is the same (same host) you have the 2 options 'reset-brick' and 'replace-brick'.

So in your case you need:
1. VDO starts first
2. LVM scans the VDO device and activates the VG using the VDO as a PV
3. A systemd '.mount' unit mounts the brick
Note: Define in the mount unit a requirement for the VDO service (or for a custom VDO service for this VDO device only)
4. You replace the brick in order to restore the volume

Best Regards,
Strahil Nikolov
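A minimal sketch of the kind of '.mount' unit Strahil describes, assuming a brick at /gluster_bricks/data2 backed by the LV names seen earlier in this thread. The unit contents and mount options are hypothetical; systemd derives the unit name from the mount path, so the file would live at /etc/systemd/system/gluster_bricks-data2.mount.

    [Unit]
    Description=Gluster brick data2 on a VDO-backed LV
    # make sure the VDO devices exist before we try to mount
    Requires=vdo.service
    After=vdo.service

    [Mount]
    What=/dev/mapper/gluster_vg_sdd-gluster_lv_data2
    Where=/gluster_bricks/data2
    Type=xfs
    Options=inode64,noatime,nodiratime

    [Install]
    WantedBy=multi-user.target

After a 'systemctl daemon-reload' you would enable it with 'systemctl enable --now gluster_bricks-data2.mount'.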

Strahil, in our case the VDO was created during the oVirt HCI setup, so I am trying to determine how it gets mounted. As far as I can tell the config is as follows:

[root@host1 ~]# vdo printConfigFile
config: !Configuration
  vdos:
    vdo_sdc: !VDOService
      _operationState: finished
      ackThreads: 1
      activated: enabled
      bioRotationInterval: 64
      bioThreads: 4
      blockMapCacheSize: 128M
      blockMapPeriod: 16380
      compression: enabled
      cpuThreads: 2
      deduplication: enabled
      device: /dev/disk/by-id/scsi-3600508b1001cd70935270813aca97c44
      hashZoneThreads: 1
      indexCfreq: 0
      indexMemory: 0.25
      indexSparse: disabled
      indexThreads: 0
      logicalBlockSize: 512
      logicalSize: 7200G
      logicalThreads: 1
      name: vdo_sdc
      physicalSize: 781379416K
      physicalThreads: 1
      readCache: enabled
      readCacheSize: 20M
      slabSize: 32G
      writePolicy: auto
  version: 538380551
  filename: /etc/vdoconf.yml

[root@host1 ~]# lsblk /dev/sdc
NAME                                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdc                                     8:32   0 745.2G  0 disk
├─3600508b1001cd70935270813aca97c44   253:6    0 745.2G  0 mpath
  └─vdo_sdc                           253:21   0     7T  0 vdo
    └─gluster_vg_sdc-gluster_lv_data  253:22   0     7T  0 lvm   /gluster_bricks/data

I know that vdo_sdc is TYPE="LVM2_member"; this is from the /etc/fstab entries:

/dev/mapper/vdo_sdc: UUID="gsaPfw-agoW-HZ3o-ly0W-wjb3-YqvL-266SnR" TYPE="LVM2_member"
/dev/mapper/gluster_vg_sdc-gluster_lv_data: UUID="8a94b876-baf2-442c-9e7f-6573308c8ef3" TYPE="xfs"

--- Physical volumes ---
PV Name               /dev/mapper/vdo_sdc
PV UUID               gsaPfw-agoW-HZ3o-ly0W-wjb3-YqvL-266SnR
PV Status             allocatable
Total PE / Free PE    1843199 / 0

I am trying to piece things together and doing more research on VDO in an HCI oVirt setup. In the meantime any help is welcome.

Thanks again,
Adrian

Hi Adrian,

It seems that /dev/sdc (persistent name /dev/disk/by-id/scsi-3600508b1001cd70935270813aca97c44) is a PV in volume group 'gluster_vg_sdc', and the LV 'gluster_lv_data' is mounted as your brick. So you have:

Gluster brick
-> gluster_lv_data (LV)
-> gluster_vg_sdc (VG)
-> gsaPfw-agoW-HZ3o-ly0W-wjb3-YqvL-266SnR (PV's UUID)
-> /dev/disk/by-id/scsi-3600508b1001cd70935270813aca97c44 (multipath with 1 leg)
-> /dev/sdc (block device)

Note: You should avoid using multipath for local devices. In order to do that, you can put the following in your /etc/multipath.conf to prevent vdsm from modifying it:

# VDSM PRIVATE

And you should blacklist the wwid:

blacklist {
    wwid 3600508b1001cd70935270813aca97c44
}

To verify the multipath.conf, run 'multipath -v3' and check for any errors. Once you have modified /etc/multipath.conf and verified it with 'multipath -v3', you should update the initramfs: 'dracut -f'.

Best Regards,
Strahil Nikolov
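Putting Strahil's pieces together, the local-disk exclusion could end up looking something like the sketch below. The wwid is the one from this host's lsblk output and the "# VDSM REVISION" header mirrors the one quoted later in the thread; the exact revision line should match whatever vdsm originally wrote on the host.

    # top of /etc/multipath.conf
    # VDSM REVISION 1.8
    # VDSM PRIVATE
    blacklist {
        wwid 3600508b1001cd70935270813aca97c44
    }

    # then verify the configuration and rebuild the initramfs
    multipath -v3
    dracut -f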

Hm... I forgot to describe the VDO layer - it sits above the multipath device and below the PV layer.

Best Regards,
Strahil Nikolov

Thanks Strahil. I made a mistake in this 3 node cluster; I use this cluster for testing. In our prod environment we do have the blacklist, but it is as follows:

# VDSM REVISION 1.8
# VDSM PRIVATE
blacklist {
    devnode "*"
}

However, we did not add each individual local disk to the blacklist entries. Would this still be a case where I would have to add the individual blacklist entries as you suggested? I thought 'devnode "*"' achieved this...

From another post you mentioned something similar to:

# VDSM PRIVATE
blacklist {
    devnode "*"
    wwid Crucial_CT256MX100SSD1_14390D52DCF5
    wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
    wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
    wwid nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-00000001
}

My production host:

[root@host18 ~]# multipath -v2 -d
[root@host18 ~]#

Thoughts?

Thanks,
Adrian

'devnode "*"' will blacklist all /dev/XYZ devices, which is only acceptable if you do not plan to use SAN or iSCSI. Otherwise, you should blacklist just the local devices and the rest will stay under mpath.

Best Regards,
Strahil Nikolov
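If you do end up needing per-device blacklist entries instead of 'devnode "*"', one way to collect the wwids of the local disks (the commands are standard, the device name is hypothetical) is:

    # print the wwid multipath would use for a given local disk
    /usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sdc
    # or list the wwids of the multipath maps that already exist
    multipath -ll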

We are not planning on using SAN/iSCSI at all. I will try out your suggested steps for VDO and see if I can create a procedure for our current scenario. Thanks, I will let you know the outcome.

regards,
Adrian