On February 25, 2020 8:35:27 PM GMT+02:00, Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
On February 25, 2020 8:18:20 PM GMT+02:00, adrianquintero(a)gmail.com wrote:
>Strahil,
>In our case the VDO was created during the oVirt HCI setup, so I am trying
>to determine how it gets mounted.
>
>As far as I can tell, the config is as follows:
>
>
>[root@host1 ~]# vdo printConfigFile
>config: !Configuration
>  vdos:
>    vdo_sdc: !VDOService
>      _operationState: finished
>      ackThreads: 1
>      activated: enabled
>      bioRotationInterval: 64
>      bioThreads: 4
>      blockMapCacheSize: 128M
>      blockMapPeriod: 16380
>      compression: enabled
>      cpuThreads: 2
>      deduplication: enabled
>      device: /dev/disk/by-id/scsi-3600508b1001cd70935270813aca97c44
>      hashZoneThreads: 1
>      indexCfreq: 0
>      indexMemory: 0.25
>      indexSparse: disabled
>      indexThreads: 0
>      logicalBlockSize: 512
>      logicalSize: 7200G
>      logicalThreads: 1
>      name: vdo_sdc
>      physicalSize: 781379416K
>      physicalThreads: 1
>      readCache: enabled
>      readCacheSize: 20M
>      slabSize: 32G
>      writePolicy: auto
>  version: 538380551
>filename: /etc/vdoconf.yml
>
>
>
>[root@host1 ~]# lsblk /dev/sdc
>NAME                                   MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
>sdc                                      8:32   0 745.2G  0 disk
>├─3600508b1001cd70935270813aca97c44    253:6    0 745.2G  0 mpath
>  └─vdo_sdc                            253:21   0     7T  0 vdo
>    └─gluster_vg_sdc-gluster_lv_data   253:22   0     7T  0 lvm   /gluster_bricks/data
>
>I know that vdo_sdc is TYPE="LVM2_member"; this is from the blkid output:
>/dev/mapper/vdo_sdc: UUID="gsaPfw-agoW-HZ3o-ly0W-wjb3-YqvL-266SnR" TYPE="LVM2_member"
>/dev/mapper/gluster_vg_sdc-gluster_lv_data: UUID="8a94b876-baf2-442c-9e7f-6573308c8ef3" TYPE="xfs"
>
>  --- Physical volumes ---
>  PV Name               /dev/mapper/vdo_sdc
>  PV UUID               gsaPfw-agoW-HZ3o-ly0W-wjb3-YqvL-266SnR
>  PV Status             allocatable
>  Total PE / Free PE    1843199 / 0
>
>
>
>
>I am trying to piece things together and am doing more research on VDO
>in an oVirt HCI setup.
>
>In the meantime any help is welcome.
>
>Thanks again,
>
>Adrian
>
Hi Adrian,
It seems that /dev/sdc (persistent name
/dev/disk/by-id/scsi-3600508b1001cd70935270813aca97c44) is a PV in
Volume Group 'gluster_vg_sdc', and the LV 'gluster_lv_data' is mounted
as your brick.
So you have:
Gluster Brick
gluster_lv_data -> LV
gluster_vg_sdc -> VG
gsaPfw-agoW-HZ3o-ly0W-wjb3-YqvL-266SnR -> PV's UUID
/dev/disk/by-id/scsi-3600508b1001cd70935270813aca97c44 -> multipath with 1 leg
/dev/sdc -> block device
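If you want to double-check each of those layers, something like this should do it (just a sketch, using the names from your output):

multipath -ll 3600508b1001cd70935270813aca97c44     # the multipath map and its single path (sdc)
vdo status --name=vdo_sdc                            # the VDO volume on top of the multipath device
pvs -o pv_name,vg_name,pv_uuid /dev/mapper/vdo_sdc   # the PV and the VG it belongs to
lvs gluster_vg_sdc                                   # the gluster_lv_data LV
findmnt /gluster_bricks/data                         # the XFS brick mount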
Note: You should avoid using multipath for local devices.
In order to do that, you can put the following in your
/etc/multipath.conf to prevent vdsm from modifying it:
# VDSM PRIVATE
And you should blacklist the wwid:
blacklist {
    wwid 3600508b1001cd70935270813aca97c44
}
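Roughly, the relevant part of /etc/multipath.conf would then look like this (a sketch - if I recall correctly, '# VDSM PRIVATE' goes near the top, right after the '# VDSM REVISION' line that vdsm writes, and the rest of the generated config stays as it is):

# VDSM PRIVATE

blacklist {
    wwid 3600508b1001cd70935270813aca97c44
}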
To verify the multipath.conf, run 'multipath -v3' and check for any errors.
Once you have modified /etc/multipath.conf and verified it with
'multipath -v3', you should update the initramfs:
'dracut -f'
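For reference, those last two steps as commands (a sketch - the grep is just a quick way to spot problems in the verbose output):

multipath -v3 2>&1 | grep -i error    # look for config errors in the verbose output
dracut -f                             # rebuild the initramfs with the new multipath.conf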
Best Regards,
Strahil Nikolov
Hm...
I forgot to describe the VDO layer - it sits above the mpath device and below the PV.
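Putting that in, the full stack from bottom to top (all names taken from your output) is:

/dev/sdc (block device)
 -> /dev/disk/by-id/scsi-3600508b1001cd70935270813aca97c44 (multipath with 1 leg)
 -> vdo_sdc (VDO volume)
 -> PV gsaPfw-agoW-HZ3o-ly0W-wjb3-YqvL-266SnR in VG gluster_vg_sdc
 -> gluster_lv_data (LV)
 -> XFS mounted at /gluster_bricks/data (your Gluster brick)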
Best Regards,
Strahil Nikolov