On 09/04/2018 02:22 PM, Nir Soffer wrote:
Maybe you have an lvm filter set, which is highly recommended for an oVirt
hypervisor.
Indeed, I do. I am not sure I have the right filter however, so I
appreciate the help.
This is the filter setup initially:
filter = [ "a|^/dev/mapper/3600508b1001c7e172160824d7b204c3b2$|",
"r|.*|" ]
Just to be clear, my intent isn't to add /dev/sdb to the main volume
group, but to make a new volume group to set up a local ext4 mount point.
I changed it to:
filter = [ "a|^/dev/sdb|",
"a|^/dev/mapper/3600508b1001c7e172160824d7b204c3b2$|", "r|.*|" ]
Following this and a reboot, I was able to create a PV, VG, and LV.
# pvcreate /dev/sdb
# vgcreate data /dev/sdb
# lvcreate -L800g /dev/data --name local_images
# mkfs.ext4 /dev/data/local_images
-- adjust fstab
# mount -a
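For reference, the fstab entry is along these lines (the mount point path
here is only an example):

# /etc/fstab -- the mount point path is illustrative
/dev/data/local_images  /data/images  ext4  defaults  0 0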
It seems to function as expected now that the filter has been adjusted.
But is the filter doing what it is "supposed" to?
When I run the command "vdsm-tool config-lvm-filter" what I see is:
[root@node4-g8-h4 ~]# vdsm-tool config-lvm-filter
Analyzing host...
LVM filter is already configured for Vdsm
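For anyone wanting to double-check what the filter admits, plain LVM
commands work too (a sketch, nothing vdsm-specific):

# Show the filter LVM is actually using
lvmconfig devices/filter
# List the PVs visible through that filter; only whitelisted devices appear
pvs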
Thanks for the help and confirming how this should work.
Matt
To add /dev/sdb, you need to add it to the lvm filter in
/etc/lvm/lvm.conf.
After you configure the device properly, you can generate an lvm filter
for the current setup using:
vdsm-tool config-lvm-filter
Here is an example run on an unconfigured oVirt host:
# vdsm-tool config-lvm-filter
Analyzing host...
Found these mounted logical volumes on this host:

  logical volume:  /dev/mapper/fedora_voodoo1-root
  mountpoint:      /
  devices:         /dev/vda2

  logical volume:  /dev/mapper/fedora_voodoo1-swap
  mountpoint:      [SWAP]
  devices:         /dev/vda2
This is the recommended LVM filter for this host:
filter = [ "a|^/dev/vda2$|", "r|.*|" ]
This filter allows LVM to access the local devices used by the
hypervisor, but not shared storage owned by Vdsm. If you add a new
device to the volume group, you will need to edit the filter manually.
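For example (the device name here is hypothetical), adding a new local disk
/dev/sdc to a volume group would mean extending the filter in
/etc/lvm/lvm.conf along these lines:

filter = [ "a|^/dev/sdc$|", "a|^/dev/vda2$|", "r|.*|" ]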
Nir
On 09/04/2018 01:23 PM, Matt Simonsen wrote:
> Hello,
>
> I'm running oVirt with several data centers, some with NFS storage and
> some with local storage.
>
> I had problems in the past with a large pool and local storage. The
> problem was nodectl showed the pool being too full (I think >80%), but
> it was only the images that made the pool "full" -- and this storage
> was carefully set up such that there was no chance it would actually
> fill. The LVs for oVirt itself were all under 20%, yet nodectl still
> reported the pool was too full.
>
> My solution so far has been to use our RAID card tools, so that sda is
> the oVirt node install, and sdb is for images. There are probably
> other good reasons for me to handle it this way, for example being
> able to use different RAID levels, but I'm hoping someone can confirm
> my partitioning below doesn't have some risk I'm not yet aware of.
>
> I set up a new volume group for images, as below:
>
>
> [root@node4-g8-h4 multipath]# pvs
>   PV                                             VG              Fmt  Attr PSize    PFree
>   /dev/mapper/3600508b1001c7e172160824d7b204c3b2 onn_node4-g8-h4 lvm2 a--  <119.00g  <22.85g
>   /dev/sdb1                                      data            lvm2 a--     1.13t <361.30g
>
> [root@node4-g8-h4 multipath]# vgs
>   VG              #PV #LV #SN Attr   VSize    VFree
>   data              1   1   0 wz--n-    1.13t <361.30g
>   onn_node4-g8-h4   1  13   0 wz--n- <119.00g  <22.85g
>
> [root@node4-g8-h4 multipath]# lvs
>   LV                                   VG              Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
>   images_main                          data            -wi-ao---- 800.00g
>   home                                 onn_node4-g8-h4 Vwi-aotz--   1.00g pool00                                     4.79
>   ovirt-node-ng-4.2.5.1-0.20180816.0   onn_node4-g8-h4 Vwi---tz-k  64.10g pool00 root
>   ovirt-node-ng-4.2.5.1-0.20180816.0+1 onn_node4-g8-h4 Vwi---tz--  64.10g pool00 ovirt-node-ng-4.2.5.1-0.20180816.0
>   ovirt-node-ng-4.2.6-0.20180903.0     onn_node4-g8-h4 Vri---tz-k  64.10g pool00
>   ovirt-node-ng-4.2.6-0.20180903.0+1   onn_node4-g8-h4 Vwi-aotz--  64.10g pool00 ovirt-node-ng-4.2.6-0.20180903.0    4.83
>   pool00                               onn_node4-g8-h4 twi-aotz--  91.10g                                            8.94   0.49
>   root                                 onn_node4-g8-h4 Vwi---tz--  64.10g pool00
>   swap                                 onn_node4-g8-h4 -wi-ao----   4.00g
>   tmp                                  onn_node4-g8-h4 Vwi-aotz--   1.00g pool00                                     4.87
>   var                                  onn_node4-g8-h4 Vwi-aotz--  15.00g pool00                                     3.31
>   var_crash                            onn_node4-g8-h4 Vwi-aotz--  10.00g pool00                                     2.86
>   var_log                              onn_node4-g8-h4 Vwi-aotz--   8.00g pool00                                     3.57
>   var_log_audit                        onn_node4-g8-h4 Vwi-aotz--   2.00g pool00                                     4.89
>
> The images_main is set up as "Block device for filesystems" with ext4.
> Is there any reason I should consider a pool for thinly provisioned
> volumes? I don't need to over-allocate storage, and it seems to me
> like a fixed partition is ideal. Please confirm or let me know if
> there's anything else I should consider.
>
>
> Thanks
>
> Matt
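On the thin-vs-fixed question quoted above: a fixed-size (thick) LV like
images_main reserves its full space up front, while a thin pool only matters
if over-allocation is wanted. A minimal sketch of both, with placeholder
names and sizes:

# Fixed-size (thick) LV, as used for images_main above
lvcreate -L 800g -n images_main data
# Thin pool plus a thin LV inside it -- only needed for over-allocation
lvcreate -L 800g -T data/images_pool
lvcreate -V 800g -T data/images_pool -n images_thin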
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/LJINANK6PAG...