Hello,
on my all-in-one installation at home I had 3.5.0 with F20.
Today I updated to 3.5.1.
It seems the update modified /etc/multipath.conf, preventing me from using
my second disk at all...
My system has an internal SSD disk (sda) for the OS and one local storage
domain, and another disk (sdb) with some partitions (on one of them there
is also another local storage domain).
At reboot I was dropped into emergency mode because the partitions on the
sdb disk could not be mounted (they were busy).
It took me some time to understand that sdb was now being managed as a
multipath device, so the device was held busy and its partitions could not
be mounted.
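In case anyone hits the same symptom, something like the following should
show whether multipath has claimed a disk (sdb is just the device name on
my box, adjust as needed):

  multipath -ll                  # active maps; the disk's WWID appears if claimed
  ls /sys/block/sdb/holders/     # device-mapper devices sitting on top of sdb
  lsblk /dev/sdb                 # the whole block device stack at a glance
  multipath -f <wwid>            # flush the map to release the disk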
Here you can see what the multipath configuration looked like after the
update and reboot:
https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sha...
There is no device-mapper-multipath update in yum.log, so the package
itself was not updated.
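For reference, checks along these lines can tell which package (if any)
owns the file and when it was last written:

  rpm -qf /etc/multipath.conf          # owning package, if any
  rpm -V device-mapper-multipath       # verify files against the rpm database
  stat -c '%y %n' /etc/multipath.conf  # last modification time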
Also, it seems that after I changed the file it was reverted again at boot
(I don't know whether the responsible party was initrd/dracut or vdsmd), so
in the meantime the only thing I could do was make the file immutable with

  chattr +i /etc/multipath.conf

After that I was able to reboot and verify that my partitions on sdb were
OK, and I was able to mount them (to be safe I also ran fsck against them).
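For anyone needing the same workaround, the immutable flag can be checked
and removed again like this:

  chattr +i /etc/multipath.conf   # make the file immutable
  lsattr  /etc/multipath.conf     # an 'i' among the flags confirms it
  chattr -i /etc/multipath.conf   # undo once the real cause is fixed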
The update ran around 19:20 and finished at 19:34.
Here is the log in gzip format:
https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sha...
The reboot was done around 21:10-21:14.
Here is my /var/log/messages in gzip format, covering the last few days:
https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sha...
Any suggestions appreciated.
Current multipath.conf (where I also commented out the getuid_callout
option, which is not used anymore):
[root@tekkaman setup]# cat /etc/multipath.conf
# RHEV REVISION 1.1

blacklist {
    devnode "^(sda|sdb)[0-9]*"
}

defaults {
    polling_interval        5
    #getuid_callout         "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"
    no_path_retry           fail
    user_friendly_names     no
    flush_on_last_del       yes
    fast_io_fail_tmo        5
    dev_loss_tmo            30
    max_fds                 4096
}
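By the way, from what I understand of vdsm, it regenerates
/etc/multipath.conf at startup unless the file carries a private tag, so a
cleaner alternative to chattr +i might be adding this marker as the second
header line (untested on my side, corrections welcome):

  # RHEV REVISION 1.1
  # RHEV PRIVATE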
Gianluca