[ovirt-users] Update to 3.5.1 scrambled multipath.conf?

Benjamin Marzinski bmarzins at redhat.com
Mon Jan 26 17:14:41 UTC 2015


On Mon, Jan 26, 2015 at 10:27:23AM -0500, Fabian Deutsch wrote:
> 
> 
> ----- Original Message -----
> > ----- Original Message -----
> > > From: "Dan Kenigsberg" <danken at redhat.com>
> > > To: "Gianluca Cecchi" <gianluca.cecchi at gmail.com>, nsoffer at redhat.com
> > > Cc: "users" <users at ovirt.org>, ykaplan at redhat.com
> > > Sent: Monday, January 26, 2015 2:09:23 PM
> > > Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?
> > > 
> > > On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote:
> > > > Hello,
> > > > on my all-in-one installation @home I had 3.5.0 on F20.
> > > > Today I updated to 3.5.1.
> > > > 
> > > > It seems the update modified /etc/multipath.conf, preventing me from
> > > > using my second disk at all...
> > > > 
> > > > My system has an internal SSD disk (sda) for the OS and one local
> > > > storage domain, and another disk (sdb) with some partitions (one of
> > > > them also holds another local storage domain).
> > > > 
> > > > At reboot I was dropped into emergency mode because the partitions on
> > > > the sdb disk could not be mounted (they were busy).
> > > > It took me some time to understand that sdb was now being managed as
> > > > a multipath device, and was therefore busy, so its partitions could
> > > > not be mounted.
> > > > 
> > > > Here you can see what multipath.conf looked like after the update and
> > > > reboot:
> > > > https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing
> > > > 
> > > > No device-mapper-multipath update appears in yum.log.
> > > > 
> > > > Also, it seems that after I changed it, it was reverted again at boot
> > > > (I don't know whether initrd/dracut or vdsmd was responsible), so in
> > > > the meantime the only thing I could do was to make the file immutable
> > > > with
> > > > 
> > > > chattr +i /etc/multipath.conf
> > > 
> > > The "supported" method of achieving this is to place "# RHEV PRIVATE" in
> > > the second line of your hand-modified multipath.conf
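> > > 
> > > For example (a minimal sketch reusing the blacklist from this report;
> > > only the marker's position matters - it must be the second line):
> > > 
> > > # RHEV REVISION 1.1
> > > # RHEV PRIVATE
> > > 
> > > blacklist {
> > >     devnode "^(sda|sdb)[0-9]*"
> > > }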
> > > 
> > > I do not understand why this happened only after the upgrade to 3.5.1 -
> > > 3.5.0's vdsm should have reverted your multipath.conf just as well
> > > during each startup.
> > > 
> > > The good thing is that this annoying behavior has been dropped from the
> > > master branch, so 3.6 is not going to have it. Vdsm will no longer mess
> > > with other services' config files while it is running; the logic has
> > > moved to `vdsm-tool configure`.
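> > > 
> > > Roughly (a sketch; if I recall correctly, the --force flag makes it
> > > reconfigure even when the files were already touched):
> > > 
> > > # vdsm-tool configure --force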
> > > 
> > > > 
> > > > and so I was able to reboot and verify that my partitions on sdb were
> > > > ok and could be mounted (to be safe I also ran fsck against them)
> > > > 
> > > > The update started around 19:20 and finished at 19:34.
> > > > Here is the log in gzip format:
> > > > https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing
> > > > 
> > > > The reboot was done around 21:10-21:14.
> > > > 
> > > > Here is my /var/log/messages in gzip format, covering the last few
> > > > days:
> > > > https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing
> > > > 
> > > > 
> > > > Any suggestions appreciated.
> > > > 
> > > > Current multipath.conf (where I also commented out the getuid_callout
> > > > option, which is not used anymore):
> > > > 
> > > > [root at tekkaman setup]# cat /etc/multipath.conf
> > > > # RHEV REVISION 1.1
> > > > 
> > > > blacklist {
> > > >     devnode "^(sda|sdb)[0-9]*"
> > > > }
> > 
> > 
> > I think what happened is:
> > 
> > 1. 3.5.1 shipped a new revision of vdsm's multipath.conf
> > 2. So vdsm upgraded the local file
> > 3. The blacklist above was removed
> >    (the old file should exist as /etc/multipath.bak)
> > 
> > To prevent vdsm from overwriting local changes, you have to mark the
> > file as private, as Dan suggests.
> > 
> > This seems to be related to the find_multipaths = "yes" bug:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1173290
> 
> The symptoms above sound exactly like this issue.
> When find_multipaths is "no" (the default when the directive is not
> present), multipath tries to claim all non-blacklisted devices, and that
> is what happened above.
> 
> Blacklisting the devices works, and adding "find_multipaths yes" should
> also work, because in that case only devices that have more than one
> path (or are explicitly named) will be claimed by multipath.
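> 
> Roughly, in /etc/multipath.conf (untested sketch):
> 
> defaults {
>     find_multipaths yes
> }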

I would like to point out one issue.  Once a device is claimed (even if
find_multipaths wasn't set when it was claimed) it will get added to
/etc/multipath/wwids.  This means that if you have previously claimed a
single-path device, adding "find_multipaths yes" won't stop that device
from being claimed in the future (since it is in the wwids file). You
would need to either run:

# multipath -w <device>

to remove the device's wwid from the wwids file, or run

# multipath -W

to reset the wwids file so it only includes the wwids of the current
multipath devices (obviously, you need to remove any devices that you
don't want multipathed before you run this).
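
For example, a possible cleanup sequence for this report's sdb (a
sketch; the map name is whatever "multipath -ll" shows for the device):

# multipath -f <map name>   # flush the unwanted multipath map
# multipath -w /dev/sdb     # drop sdb's wwid from /etc/multipath/wwids

or, once all unwanted maps are removed:

# multipath -W              # keep only wwids of current multipath devices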

> 
> My 2 cents.
> 
> - fabian
> 
> > Ben, can you confirm that this is the same issue?

Yeah, I think so.

-Ben

> > 
> > > > 
> > > > defaults {
> > > >     polling_interval        5
> > > >     #getuid_callout          "/usr/lib/udev/scsi_id --whitelisted
> > > > --replace-whitespace --device=/dev/%n"
> > > >     no_path_retry           fail
> > > >     user_friendly_names     no
> > > >     flush_on_last_del       yes
> > > >     fast_io_fail_tmo        5
> > > >     dev_loss_tmo            30
> > > >     max_fds                 4096
> > > > }
> > > 
> > 
> > Regards,
> > Nir
> > _______________________________________________
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> > 


