Re: [ovirt-users] Self hosted engine issues

----- Original Message -----
From: "Stefano Danzi" <s.danzi@hawai.it> To: "Nir Soffer" <nsoffer@redhat.com> Sent: Thursday, February 5, 2015 1:39:41 PM Subject: Re: [ovirt-users] Self hosted engine iusses
Here
In the vdsm log I see that you installed the hosted engine on an NFS storage domain, so not having multipath devices or oVirt VGs/LVs is expected. Your multipath errors may be related to oVirt, but only because vdsm requires and starts multipathd and installs a new multipath.conf. I suggest trying to get help about it in the device-mapper channels (e.g. #lvm on freenode). If you cannot resolve this, please open a bug.

Nir
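For reference, a quick way to tell whether the multipath.conf on a host is the one written by vdsm is to look at the tag on its first line. The "RHEV PRIVATE" behaviour below is from memory of the vdsm of that era, so treat it as an assumption to verify against your version:

head -n 2 /etc/multipath.conf
# "# RHEV REVISION x.y" -> the file was generated by vdsm and may be rewritten on upgrade/redeploy
# "# RHEV PRIVATE"      -> vdsm is expected to treat the file as user-managed and leave it alone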
On 05/02/2015 12:31, Nir Soffer wrote:
----- Original Message -----
From: "Stefano Danzi" <s.danzi@hawai.it> To: "Nir Soffer" <nsoffer@redhat.com> Cc: users@ovirt.org Sent: Thursday, February 5, 2015 1:17:01 PM Subject: Re: [ovirt-users] Self hosted engine iusses
On 05/02/2015 12:08, Nir Soffer wrote:
----- Original Message -----
From: "Stefano Danzi" <s.danzi@hawai.it> To: "Nir Soffer" <nsoffer@redhat.com> Cc: users@ovirt.org Sent: Thursday, February 5, 2015 12:58:35 PM Subject: Re: [ovirt-users] Self hosted engine iusses
On 05/02/2015 11:52, Nir Soffer wrote:
----- Original Message -----
After the oVirt installation, I see this error on the host console every 5 minutes:

[ 1823.837020] device-mapper: table: 253:4: multipath: error getting device
[ 1823.837228] device-mapper: ioctl: error adding target to table

This may be caused by the fact that vdsm does not clean up properly after deactivating storage domains. We have an open bug on this.
You may have an active LV using a non-existent multipath device.
Can you share with us the output of:
lsblk
multipath -ll
dmsetup table
cat /etc/multipath.conf
pvscan --cache > /dev/null && lvs
Nir
See below:
[root@ovirt01 etc]# lsblk
NAME                                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                    8:0    0 931,5G  0 disk
├─sda1                                 8:1    0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sda2                                 8:2    0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
sdb                                    8:16   0 931,5G  0 disk
├─sdb1                                 8:17   0   500M  0 part
│ └─md0                                9:0    0   500M  0 raid1 /boot
└─sdb2                                 8:18   0   931G  0 part
  └─md1                                9:1    0 930,9G  0 raid1
    ├─centos_ovirt01-swap            253:0    0   7,9G  0 lvm   [SWAP]
    ├─centos_ovirt01-root            253:1    0    50G  0 lvm   /
    ├─centos_ovirt01-home            253:2    0    10G  0 lvm   /home
    └─centos_ovirt01-glusterOVEngine 253:3    0    50G  0 lvm   /home/glusterfs/engine
[root@ovirt01 etc]# multipath -ll
Feb 05 11:56:25 | multipath.conf +5, invalid keyword: getuid_callout
Feb 05 11:56:25 | multipath.conf +18, invalid keyword: getuid_callout
Feb 05 11:56:25 | multipath.conf +37, invalid keyword: getuid_callout
[root@ovirt01 etc]# dmsetup table
centos_ovirt01-home: 0 20971520 linear 9:1 121391104
centos_ovirt01-swap: 0 16531456 linear 9:1 2048
centos_ovirt01-root: 0 104857600 linear 9:1 16533504
centos_ovirt01-glusterOVEngine: 0 104857600 linear 9:1 142362624
[root@ovirt01 etc]# cat /etc/multipath.conf
# RHEV REVISION 1.1
defaults {
    polling_interval        5
    getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}
devices {
    device {
        vendor                  "HITACHI"
        product                 "DF.*"
        getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    }
    device {
        vendor                  "COMPELNT"
        product                 "Compellent Vol"
        no_path_retry           fail
    }
    device {
        # multipath.conf.default
        vendor                  "DGC"
        product                 ".*"
        product_blacklist       "LUNZ"
        path_grouping_policy    "group_by_prio"
        path_checker            "emc_clariion"
        hardware_handler        "1 emc"
        prio                    "emc"
        failback                immediate
        rr_weight               "uniform"
        # vdsm required configuration
        getuid_callout          "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
        features                "0"
        no_path_retry           fail
    }
}
[root@ovirt01 etc]# pvscan --cache > /dev/null && lvs
  Incorrect metadata area header checksum on /dev/sda2 at offset 4096
  Incorrect metadata area header checksum on /dev/sda2 at offset 4096
  LV              VG             Attr       LSize  Pool Origin Data%  Move Log Cpy%Sync Convert
  glusterOVEngine centos_ovirt01 -wi-ao---- 50,00g
  home            centos_ovirt01 -wi-ao---- 10,00g
  root            centos_ovirt01 -wi-ao---- 50,00g
  swap            centos_ovirt01 -wi-ao----  7,88g

Are you sure this is the correct host that the multipath error came from?
There are no multipath devices on this host and no oVirt storage domain LVs.
Nir
Yes, this is the host, I'm sure. I haven't configured any oVirt storage domains yet (I only installed oVirt on the host and the self-hosted engine VM).
Here is a part of /var/log/messages:
Feb 5 10:04:43 ovirt01 kernel: device-mapper: table: 253:4: multipath: error getting device
Feb 5 10:04:43 ovirt01 kernel: device-mapper: ioctl: error adding target to table
Feb 5 10:04:43 ovirt01 multipathd: dm-4: remove map (uevent)
Feb 5 10:04:43 ovirt01 multipathd: dm-4: remove map (uevent)
Feb 5 10:04:43 ovirt01 kernel: device-mapper: table: 253:4: multipath: error getting device
Feb 5 10:04:43 ovirt01 kernel: device-mapper: ioctl: error adding target to table
Feb 5 10:04:43 ovirt01 multipathd: dm-4: remove map (uevent)
Feb 5 10:04:43 ovirt01 multipathd: dm-4: remove map (uevent)
and kernel version:
[root@ovirt01 etc]# uname -a
Linux ovirt01.hawai.lan 3.10.0-123.20.1.el7.x86_64 #1 SMP Thu Jan 29 18:05:33 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Can you attach the vdsm logs? (/var/log/vdsm/vdsm.log*)
Nir
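As a hedged aside: if a leftover device-mapper map such as dm-4 actually persisted on the host (as suggested earlier for the case of an active LV on a vanished multipath device), it could be inspected and, only if its open count is 0, removed along these lines. The map name below is a placeholder:

dmsetup ls                     # list all dm maps with their major:minor numbers (look for 253:4)
dmsetup info -c                # tabular view including open count and UUID
dmsetup table STALE_MAP_NAME   # confirm it is a multipath table pointing at a missing device
dmsetup remove STALE_MAP_NAME  # drop the leftover map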

I ran into this same problem after setting up my cluster on EL7. As has been pointed out, the hosted-engine installer modifies /etc/multipath.conf. I appended:

blacklist {
    devnode "*"
}

to the end of the modified multipath.conf, which is what was there before the engine installer ran, and the errors stopped. I think in my case it was 253:3 trying to map, which doesn't exist on my systems. I have a similar setup: md RAID1 and LVM+XFS for Gluster.
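A sketch of that change expressed as commands; the backup step and the heredoc/restart are my own phrasing under the assumption that multipathd re-reads the file on restart, not George's exact steps:

cp /etc/multipath.conf /etc/multipath.conf.bak
cat >> /etc/multipath.conf <<'EOF'
blacklist {
    devnode "*"
}
EOF
systemctl restart multipathd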

You can also add "find_multipaths 1" to /etc/multipath.conf. This keeps multipathd from claiming non-multipath devices as multipath devices, which avoids the error messages and keeps multipathd from binding your normal devices. I find it simpler than blacklisting, and it should still work if you also have real multipath devices.

defaults {
    find_multipaths yes
    polling_interval 5
    …
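For what it's worth, after changing the file one can confirm that multipathd no longer claims the local md/LVM devices; on a host with no real multipath storage, no maps should be listed (a sketch, assuming a systemd-based EL7 host):

systemctl restart multipathd
multipath -ll                                    # should list no maps once find_multipaths takes effect
multipath -v3 2>&1 | grep -iE 'blacklist|wwid'   # optional: verbose view of why devices are skipped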

This solved the issue!!! Thanks!!

If oVirt rewrites /etc/multipath.conf, maybe it would be useful to open a bug... What do you all think about it?

Please open a bug, Stefano.

Thanks,
Doron

I think this bug is covering the cause for this:

https://bugzilla.redhat.com/show_bug.cgi?id=1173290

- fabian

On 06/02/15 13:25, Fabian Deutsch wrote:
I think this bug is covering the cause for this:
https://bugzilla.redhat.com/show_bug.cgi?id=1173290
- fabian
Thanks Fabian.

I wiped my test cluster and started over. This time I did not do the devnode blacklist and instead set "find_multipaths yes" (as is also in the default EL7 multipath.conf), and that worked fine as well; the device-mapper system messages went away.
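For reference, the stock EL7 configuration George mentions can be generated with mpathconf. This is only a sketch: mpathconf edits /etc/multipath.conf in place, so be careful if you want to keep vdsm's version of the file:

mpathconf --enable --find_multipaths y --with_multipathd y
grep find_multipaths /etc/multipath.conf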
participants (6)
- Darrell Budic
- Doron Fediuck
- Fabian Deutsch
- George Skorup
- Nir Soffer
- Stefano Danzi