[ovirt-users] Update to 3.5.1 scrambled multipath.conf?
Nir Soffer
nsoffer at redhat.com
Mon Jan 26 12:37:59 UTC 2015
----- Original Message -----
> From: "Dan Kenigsberg" <danken at redhat.com>
> To: "Gianluca Cecchi" <gianluca.cecchi at gmail.com>, nsoffer at redhat.com
> Cc: "users" <users at ovirt.org>, ykaplan at redhat.com
> Sent: Monday, January 26, 2015 2:09:23 PM
> Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?
>
> On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote:
> > Hello,
> > on my all-in-one installation @home I had 3.5.0 with F20.
> > Today I updated to 3.5.1.
> >
> > It seems the update modified /etc/multipath.conf, preventing me from using
> > my second disk at all...
> >
> > My system has an internal SSD disk (sda) for the OS and one local storage
> > domain, and another disk (sdb) with some partitions (on one of them there
> > is also another local storage domain).
> >
> > At reboot I was dropped into emergency mode because the partitions on the
> > sdb disk could not be mounted (they were busy).
> > It took me some time to understand that the problem was that sdb was now
> > being managed as a multipath device, so its partitions were busy and could
> > not be mounted.
> >
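For anyone hitting this, a quick way to confirm that multipath has claimed a
local disk is to look at the device tree and the active maps (standard
multipath-tools/util-linux commands, not taken from the logs below):

    lsblk /dev/sdb        # a claimed disk shows an mpath holder above its partitions
    multipath -ll         # list the maps multipath has created
    multipath -f <map>    # flush an unwanted map, if nothing is using it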
> > Here you can find how multipath.conf looked after the update and reboot:
> > https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing
> >
> > There is no device-mapper-multipath update in yum.log.
> >
> > Also, it seems that after I changed it, it was reverted again at boot (I
> > don't know whether initrd/dracut or vdsmd was responsible), so in the
> > meantime the only thing I could do was to make the file immutable with
> >
> > chattr +i /etc/multipath.conf
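As an aside, the immutable bit has to be cleared again before any legitimate
edit or package update can touch the file:

    chattr -i /etc/multipath.conf
    lsattr /etc/multipath.conf    # verify the 'i' flag is gone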
>
> The "supported" method of achieving this is to place "# RHEV PRIVATE" on
> the second line of your hand-modified multipath.conf.
>
> I do not understand why this happened only after the upgrade to 3.5.1 -
> 3.5.0 should have reverted your multipath.conf just as well during each
> vdsm startup.
>
> The good thing is that this annoying behavior has been dropped from the
> master branch, so 3.6 is not going to have it. Vdsm should not mess with
> other services' config files while it is running. The logic has moved to
> `vdsm-tool configure`.
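For reference, with that change the admin applies the configuration
explicitly instead of having vdsmd rewrite files at startup; roughly like
this (a sketch - which verbs are available depends on the vdsm version
installed):

    vdsm-tool configure --force    # write the vdsm-managed configuration
    vdsm-tool is-configured        # check whether anything still needs configuring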
>
> >
> > and so I was able to reboot and verify that my partitions on sdb were OK
> > and I was able to mount them (to be safe I also ran an fsck against them).
> >
> > The update ran around 19:20 and finished at 19:34;
> > here is the log in gzip format:
> > https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing
> >
> > The reboot was done around 21:10-21:14.
> >
> > Here is my /var/log/messages in gzip format, where you can see the latest
> > days:
> > https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing
> >
> >
> > Any suggestion appreciated.
> >
> > Current multipath.conf (where I also commented out the getuid_callout that
> > is not used anymore):
> >
> > [root@tekkaman setup]# cat /etc/multipath.conf
> > # RHEV REVISION 1.1
> >
> > blacklist {
> >     devnode "^(sda|sdb)[0-9]*"
> > }
I think what happened is:
1. 3.5.1 shipped a new version of vdsm's multipath.conf template
2. So vdsm upgraded the local file
3. The blacklist above was removed
   (it should still exist in /etc/multipath.bak)
To keep vdsm from overwriting your local changes, you have to mark the file
as private, as Dan suggests.
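As I read Dan's suggestion, the top of a hand-maintained file would then look
like this (the revision line is whatever vdsm last wrote; the second line is
the literal private marker that tells vdsm to leave the file alone):

    # RHEV REVISION 1.1
    # RHEV PRIVATE

    blacklist {
        devnode "^(sda|sdb)[0-9]*"
    }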
This seems to be related to the find_multipaths = "yes" bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1173290
Ben, can you confirm that this is the same issue?
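If it is, the usual workaround on hosts whose local disks have only one path
is either a blacklist like the one above or enabling find_multipaths, so that
multipath does not grab single-path devices. A sketch, not the template vdsm
ships:

    defaults {
        find_multipaths yes
    }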
> >
> > defaults {
> >     polling_interval 5
> >     # getuid_callout "/usr/lib/udev/scsi_id --whitelisted
> >     #     --replace-whitespace --device=/dev/%n"
> >     no_path_retry fail
> >     user_friendly_names no
> >     flush_on_last_del yes
> >     fast_io_fail_tmo 5
> >     dev_loss_tmo 30
> >     max_fds 4096
> > }
>
Regards,
Nir