[ovirt-users] Self hosted engine issues
Nir Soffer
nsoffer at redhat.com
Thu Feb 5 11:31:42 UTC 2015
----- Original Message -----
> From: "Stefano Danzi" <s.danzi at hawai.it>
> To: "Nir Soffer" <nsoffer at redhat.com>
> Cc: users at ovirt.org
> Sent: Thursday, February 5, 2015 1:17:01 PM
> Subject: Re: [ovirt-users] Self hosted engine issues
>
>
> On 05/02/2015 12.08, Nir Soffer wrote:
> >
> > ----- Original Message -----
> >> From: "Stefano Danzi" <s.danzi at hawai.it>
> >> To: "Nir Soffer" <nsoffer at redhat.com>
> >> Cc: users at ovirt.org
> >> Sent: Thursday, February 5, 2015 12:58:35 PM
> >> Subject: Re: [ovirt-users] Self hosted engine issues
> >>
> >>
> >> On 05/02/2015 11.52, Nir Soffer wrote:
> >>> ----- Original Message -----
> >>>
> >>> After installing oVirt, I see this error on the host console every 5
> >>> minutes:
> >>>
> >>> [ 1823.837020] device-mapper: table: 253:4: multipath: error getting device
> >>> [ 1823.837228] device-mapper: ioctl: error adding target to table
> >>>
> >>> This may be caused by the fact that vdsm does not clean up properly after
> >>> deactivating storage domains. We have an open bug on this.
> >>>
> >>> You may have an active LV using a non-existent multipath device.
> >>>
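A minimal way to check that hypothesis, assuming a stale map is still registered with device-mapper (any map name shown is illustrative):

    # list every device-mapper target; a stale multipath map would show a
    # "multipath" target line referencing a major:minor that no longer exists
    dmsetup table
    # map names to dm-N minor numbers (the dm-4 in the error is minor 4)
    dmsetup info -c
    # a confirmed leftover map could then be dropped with:
    #   dmsetup remove <stale-map-name>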
> >>> Can you share with us the output of:
> >>>
> >>> lsblk
> >>> multipath -ll
> >>> dmsetup table
> >>> cat /etc/multipath.conf
> >>> pvscan --cache > /dev/null && lvs
> >>>
> >>> Nir
> >>>
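One way to capture all five requested outputs in a single file to attach to the thread (a sketch; the output filename is arbitrary):

    {
        echo '== lsblk ==';               lsblk
        echo '== multipath -ll ==';       multipath -ll
        echo '== dmsetup table ==';       dmsetup table
        echo '== /etc/multipath.conf =='; cat /etc/multipath.conf
        echo '== lvs ==';                 pvscan --cache > /dev/null && lvs
    } > storage-report.txt 2>&1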
> >> See below:
> >>
> >> [root@ovirt01 etc]# lsblk
> >> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
> >> sda 8:0 0 931,5G 0 disk
> >> ├─sda1 8:1 0 500M 0 part
> >> │ └─md0 9:0 0 500M 0 raid1 /boot
> >> └─sda2 8:2 0 931G 0 part
> >> └─md1 9:1 0 930,9G 0 raid1
> >> ├─centos_ovirt01-swap 253:0 0 7,9G 0 lvm [SWAP]
> >> ├─centos_ovirt01-root 253:1 0 50G 0 lvm /
> >> ├─centos_ovirt01-home 253:2 0 10G 0 lvm /home
> >> └─centos_ovirt01-glusterOVEngine 253:3 0 50G 0 lvm
> >> /home/glusterfs/engine
> >> sdb 8:16 0 931,5G 0 disk
> >> ├─sdb1 8:17 0 500M 0 part
> >> │ └─md0 9:0 0 500M 0 raid1 /boot
> >> └─sdb2 8:18 0 931G 0 part
> >> └─md1 9:1 0 930,9G 0 raid1
> >> ├─centos_ovirt01-swap 253:0 0 7,9G 0 lvm [SWAP]
> >> ├─centos_ovirt01-root 253:1 0 50G 0 lvm /
> >> ├─centos_ovirt01-home 253:2 0 10G 0 lvm /home
> >> └─centos_ovirt01-glusterOVEngine 253:3 0 50G 0 lvm
> >> /home/glusterfs/engine
> >>
> >> [root@ovirt01 etc]# multipath -ll
> >> Feb 05 11:56:25 | multipath.conf +5, invalid keyword: getuid_callout
> >> Feb 05 11:56:25 | multipath.conf +18, invalid keyword: getuid_callout
> >> Feb 05 11:56:25 | multipath.conf +37, invalid keyword: getuid_callout
> >>
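The "invalid keyword" warnings appear because device-mapper-multipath in EL7 dropped getuid_callout; WWIDs are read via a udev attribute instead. A sketch of the EL7-era equivalent of those lines (not the file vdsm ships):

    defaults {
        # replaces: getuid_callout "/usr/lib/udev/scsi_id ..."
        uid_attribute "ID_SERIAL"
    }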
> >> [root@ovirt01 etc]# dmsetup table
> >> centos_ovirt01-home: 0 20971520 linear 9:1 121391104
> >> centos_ovirt01-swap: 0 16531456 linear 9:1 2048
> >> centos_ovirt01-root: 0 104857600 linear 9:1 16533504
> >> centos_ovirt01-glusterOVEngine: 0 104857600 linear 9:1 142362624
> >>
> >> [root@ovirt01 etc]# cat /etc/multipath.conf
> >> # RHEV REVISION 1.1
> >>
> >> defaults {
> >> polling_interval 5
> >> getuid_callout "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
> >> no_path_retry fail
> >> user_friendly_names no
> >> flush_on_last_del yes
> >> fast_io_fail_tmo 5
> >> dev_loss_tmo 30
> >> max_fds 4096
> >> }
> >>
> >> devices {
> >> device {
> >> vendor "HITACHI"
> >> product "DF.*"
> >> getuid_callout "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
> >> }
> >> device {
> >> vendor "COMPELNT"
> >> product "Compellent Vol"
> >> no_path_retry fail
> >> }
> >> device {
> >> # multipath.conf.default
> >> vendor "DGC"
> >> product ".*"
> >> product_blacklist "LUNZ"
> >> path_grouping_policy "group_by_prio"
> >> path_checker "emc_clariion"
> >> hardware_handler "1 emc"
> >> prio "emc"
> >> failback immediate
> >> rr_weight "uniform"
> >> # vdsm required configuration
> >> getuid_callout "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
> >> features "0"
> >> no_path_retry fail
> >> }
> >> }
> >>
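To see how multipathd actually parsed this file, the merged effective configuration can be dumped; unknown keywords such as getuid_callout are simply absent from it:

    multipathd -k'show config'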
> >> [root@ovirt01 etc]# pvscan --cache > /dev/null && lvs
> >> Incorrect metadata area header checksum on /dev/sda2 at offset 4096
> >> Incorrect metadata area header checksum on /dev/sda2 at offset 4096
> >> LV VG Attr LSize Pool Origin Data% Move Log Cpy%Sync Convert
> >> glusterOVEngine centos_ovirt01 -wi-ao---- 50,00g
> >> home centos_ovirt01 -wi-ao---- 10,00g
> >> root centos_ovirt01 -wi-ao---- 50,00g
> >> swap centos_ovirt01 -wi-ao---- 7,88g
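The repeated "Incorrect metadata area header checksum" warnings usually mean LVM is scanning the raw RAID members (/dev/sda2, /dev/sdb2) as well as the assembled /dev/md1. A hedged lvm.conf sketch that hides the members (device paths taken from the lsblk output above; verify against the actual layout before applying):

    # /etc/lvm/lvm.conf, devices section: accept md devices, reject the
    # underlying sdX members so only /dev/md1 is scanned as a PV
    filter = [ "a|^/dev/md|", "r|^/dev/sd|" ]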
> > Are you sure this is the host that the multipath error came from?
> >
> > There are no multipath devices on this host, and no oVirt storage domain LVs.
> >
> > Nir
> >
>
> Yes, this is the host. I'm sure.
> I haven't configured oVirt storage domains yet (I only installed oVirt on the
> host and the self-hosted engine VM).
>
> Here is part of /var/log/messages:
>
> Feb 5 10:04:43 ovirt01 kernel: device-mapper: table: 253:4: multipath: error getting device
> Feb 5 10:04:43 ovirt01 kernel: device-mapper: ioctl: error adding target to table
> Feb 5 10:04:43 ovirt01 multipathd: dm-4: remove map (uevent)
> Feb 5 10:04:43 ovirt01 multipathd: dm-4: remove map (uevent)
> Feb 5 10:04:43 ovirt01 kernel: device-mapper: table: 253:4: multipath: error getting device
> Feb 5 10:04:43 ovirt01 kernel: device-mapper: ioctl: error adding target to table
> Feb 5 10:04:43 ovirt01 multipathd: dm-4: remove map (uevent)
> Feb 5 10:04:43 ovirt01 multipathd: dm-4: remove map (uevent)
>
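The log shows dm-4 (253:4) being created and immediately torn down on each attempt. Two ways to catch it in the act, run as root while the error is recurring (a sketch):

    # show which map, if any, currently owns minor 4
    dmsetup info -c -o name,major,minor
    # watch block-device uevents live as the map is added and removed
    udevadm monitor --kernel --subsystem-match=block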
> and kernel version:
>
> [root@ovirt01 etc]# uname -a
> Linux ovirt01.hawai.lan 3.10.0-123.20.1.el7.x86_64 #1 SMP Thu Jan 29 18:05:33
> UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Can you attach the vdsm logs? (/var/log/vdsm/vdsm.log*)
Nir
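Before attaching them, a quick grep can hint at whether vdsm itself is triggering the map creation (a sketch; the pattern is illustrative):

    grep -iE 'multipath|dm-4' /var/log/vdsm/vdsm.log | tail -n 50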