<html><body><div style="font-family: georgia,serif; font-size: 12pt; color: #000000"><div>Hi Punit,</div><div>You might also find tcpdump a useful tool; you can then investigate the recorded data with Wireshark (a short capture example is in the P.S. at the bottom of this message).</div><div><a href="http://www.tcpdump.org/manpages/tcpdump.1.html" data-mce-href="http://www.tcpdump.org/manpages/tcpdump.1.html">http://www.tcpdump.org/manpages/tcpdump.1.html</a></div><div><a href="https://ask.wireshark.org/questions/23138/wireshark-for-red-hat-enterprise-linux" data-mce-href="https://ask.wireshark.org/questions/23138/wireshark-for-red-hat-enterprise-linux">https://ask.wireshark.org/questions/23138/wireshark-for-red-hat-enterprise-linux</a></div><div><span name="x"></span><br>Thanks in advance.<br><div><br></div>Best regards,<br>Nikolai<br>____________________<br>Nikolai Sednev<br>Senior Quality Engineer at Compute team<br>Red Hat Israel<br>34 Jerusalem Road,<br>Ra'anana, Israel 43501<br><div><br></div>Tel: +972 9 7692043<br>Mobile: +972 52 7342734<br>Email: nsednev@redhat.com<br>IRC: nsednev<span name="x"></span><br></div><div><br></div><hr id="zwchr"><div style="color:#000;font-weight:normal;font-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;font-size:12pt;"><b>From: </b>users-request@ovirt.org<br><b>To: </b>users@ovirt.org<br><b>Sent: </b>Monday, January 26, 2015 5:27:25 PM<br><b>Subject: </b>Users Digest, Vol 40, Issue 140<br><div><br></div>Send Users mailing list submissions to<br> users@ovirt.org<br><div><br></div>To subscribe or unsubscribe via the World Wide Web, visit<br> http://lists.ovirt.org/mailman/listinfo/users<br>or, via email, send a message with subject or body 'help' to<br> users-request@ovirt.org<br><div><br></div>You can reach the person managing the list at<br> users-owner@ovirt.org<br><div><br></div>When replying, please edit your Subject line so it is more specific<br>than "Re: Contents of Users digest..."<br><div><br></div><br>Today's Topics:<br><div><br></div> 1. Re: Update to 3.5.1 scrambled multipath.conf? (Dan Kenigsberg)<br> 2. Re: Update to 3.5.1 scrambled multipath.conf? (Nir Soffer)<br> 3. Re: Loopback interface has huge network transctions<br> (Martin Pavlík)<br> 4. Re: [ovirt-devel] oVirt 3.6 Feature: Cumulative Network<br> Usage Statistics (Dan Kenigsberg)<br> 5. Re: Cutting edge ovirt node images. (Tolik Litovsky)<br> 6. Re: Cutting edge ovirt node images. (Fabian Deutsch)<br> 7. Re: Update to 3.5.1 scrambled multipath.conf? 
(Fabian Deutsch)<br><div><br></div><br>----------------------------------------------------------------------<br><div><br></div>Message: 1<br>Date: Mon, 26 Jan 2015 12:09:23 +0000<br>From: Dan Kenigsberg <danken@redhat.com><br>To: Gianluca Cecchi <gianluca.cecchi@gmail.com>, nsoffer@redhat.com<br>Cc: users <users@ovirt.org><br>Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?<br>Message-ID: <20150126120923.GF14455@redhat.com><br>Content-Type: text/plain; charset=us-ascii<br><div><br></div>On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote:<br>> Hello,<br>> on my all-in-one installation @home I had 3.5.0 with F20.<br>> Today I updated to 3.5.1.<br>> <br>> it seems it modified /etc/multipath.conf preventing me from using my second<br>> disk at all...<br>> <br>> My system has internal ssd disk (sda) for OS and one local storage domain<br>> and another disk (sdb) with some partitions (on one of them there is also<br>> another local storage domain).<br>> <br>> At reboot I was put in emergency boot because partitions at sdb disk could<br>> not be mounted (they were busy).<br>> it took me some time to understand that the problem was due to sdb gone<br>> managed as multipath device and so busy for partitions to be mounted.<br>> <br>> Here you can find how multipath became after update and reboot<br>> https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing<br>> <br>> No device-mapper-multipath update in yum.log<br>> <br>> Also it seems that after changing it, it was then reverted at boot again (I<br>> don't know if the responsible was initrd/dracut or vdsmd) so in the mean<br>> time the only thing I could do was to make the file immutable with<br>> <br>> chattr +i /etc/multipath.conf<br><div><br></div>The "supported" method of achieving this is to place "# RHEV PRIVATE" in<br>the second line of your hand-modified multipath.conf<br><div><br></div>I do not understand why this has happened only after upgrade to 3.5.1 -<br>3.5.0's should have reverted you multipath.conf just as well during each<br>vdsm startup.<br><div><br></div>The good thing is that this annoying behavior has been dropped from the<br>master branch, so that 3.6 is not going to have it. Vdsm is not to mess<br>with other services config file while it is running. 
The logic moved to<br>`vdsm-tool configure`<br><div><br></div>> <br>> and so I was able to reboot and verify that my partitions on sdb were ok<br>> and I was able to mount them (for safe I also ran an fsck against them)<br>> <br>> Update ran around 19:20 and finished at 19:34<br>> here the log in gzip format<br>> https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing<br>> <br>> Reboot was done around 21:10-21:14<br>> <br>> Here my /var/log/messages in gzip format, where you can see latest days.<br>> https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing<br>> <br>> <br>> Any suggestion appreciated.<br>> <br>> Current multipath.conf (where I also commented out the getuid_callout that<br>> is not used anymore):<br>> <br>> [root@tekkaman setup]# cat /etc/multipath.conf<br>> # RHEV REVISION 1.1<br>> <br>> blacklist {<br>> devnode "^(sda|sdb)[0-9]*"<br>> }<br>> <br>> defaults {<br>> polling_interval 5<br>> #getuid_callout "/usr/lib/udev/scsi_id --whitelisted<br>> --replace-whitespace --device=/dev/%n"<br>> no_path_retry fail<br>> user_friendly_names no<br>> flush_on_last_del yes<br>> fast_io_fail_tmo 5<br>> dev_loss_tmo 30<br>> max_fds 4096<br>> }<br><div><br></div><br>------------------------------<br><div><br></div>Message: 2<br>Date: Mon, 26 Jan 2015 07:37:59 -0500 (EST)<br>From: Nir Soffer <nsoffer@redhat.com><br>To: Dan Kenigsberg <danken@redhat.com><br>Cc: Benjamin Marzinski <bmarzins@redhat.com>, users <users@ovirt.org><br>Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?<br>Message-ID:<br> <610569491.313815.1422275879664.JavaMail.zimbra@redhat.com><br>Content-Type: text/plain; charset=utf-8<br><div><br></div>----- Original Message -----<br>> From: "Dan Kenigsberg" <danken@redhat.com><br>> To: "Gianluca Cecchi" <gianluca.cecchi@gmail.com>, nsoffer@redhat.com<br>> Cc: "users" <users@ovirt.org>, ykaplan@redhat.com<br>> Sent: Monday, January 26, 2015 2:09:23 PM<br>> Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?<br>> <br>> On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote:<br>> > Hello,<br>> > on my all-in-one installation @home I had 3.5.0 with F20.<br>> > Today I updated to 3.5.1.<br>> > <br>> > it seems it modified /etc/multipath.conf preventing me from using my second<br>> > disk at all...<br>> > <br>> > My system has internal ssd disk (sda) for OS and one local storage domain<br>> > and another disk (sdb) with some partitions (on one of them there is also<br>> > another local storage domain).<br>> > <br>> > At reboot I was put in emergency boot because partitions at sdb disk could<br>> > not be mounted (they were busy).<br>> > it took me some time to understand that the problem was due to sdb gone<br>> > managed as multipath device and so busy for partitions to be mounted.<br>> > <br>> > Here you can find how multipath became after update and reboot<br>> > https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing<br>> > <br>> > No device-mapper-multipath update in yum.log<br>> > <br>> > Also it seems that after changing it, it was then reverted at boot again (I<br>> > don't know if the responsible was initrd/dracut or vdsmd) so in the mean<br>> > time the only thing I could do was to make the file immutable with<br>> > <br>> > chattr +i /etc/multipath.conf<br>> <br>> The "supported" method of achieving this is to place "# RHEV PRIVATE" in<br>> the second line of your hand-modified multipath.conf<br>> <br>> I do not understand why this has happened only after upgrade 
to 3.5.1 -<br>> 3.5.0's should have reverted you multipath.conf just as well during each<br>> vdsm startup.<br>> <br>> The good thing is that this annoying behavior has been dropped from the<br>> master branch, so that 3.6 is not going to have it. Vdsm is not to mess<br>> with other services config file while it is running. The logic moved to<br>> `vdsm-tool configure`<br>> <br>> > <br>> > and so I was able to reboot and verify that my partitions on sdb were ok<br>> > and I was able to mount them (for safe I also ran an fsck against them)<br>> > <br>> > Update ran around 19:20 and finished at 19:34<br>> > here the log in gzip format<br>> > https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing<br>> > <br>> > Reboot was done around 21:10-21:14<br>> > <br>> > Here my /var/log/messages in gzip format, where you can see latest days.<br>> > https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing<br>> > <br>> > <br>> > Any suggestion appreciated.<br>> > <br>> > Current multipath.conf (where I also commented out the getuid_callout that<br>> > is not used anymore):<br>> > <br>> > [root@tekkaman setup]# cat /etc/multipath.conf<br>> > # RHEV REVISION 1.1<br>> > <br>> > blacklist {<br>> > devnode "^(sda|sdb)[0-9]*"<br>> > }<br><div><br></div><br>I think what happened is:<br><div><br></div>1. 3.5.1 had new multipath version<br>2. So vdsm upgraded the local file<br>3. blacklist above was removed<br> (it should exists in /etc/multipath.bak)<br><div><br></div>To prevent local changes, you have to mark the file as private<br>as Dan suggests.<br><div><br></div>Seems to be related to the find_multiapth = "yes" bug:<br>https://bugzilla.redhat.com/show_bug.cgi?id=1173290<br><div><br></div>Ben, can you confirm that this is the same issue?<br><div><br></div>> > <br>> > defaults {<br>> > polling_interval 5<br>> > #getuid_callout "/usr/lib/udev/scsi_id --whitelisted<br>> > --replace-whitespace --device=/dev/%n"<br>> > no_path_retry fail<br>> > user_friendly_names no<br>> > flush_on_last_del yes<br>> > fast_io_fail_tmo 5<br>> > dev_loss_tmo 30<br>> > max_fds 4096<br>> > }<br>> <br><div><br></div>Regards,<br>Nir<br><div><br></div><br>------------------------------<br><div><br></div>Message: 3<br>Date: Mon, 26 Jan 2015 14:25:23 +0100<br>From: Martin Pavl?k <mpavlik@redhat.com><br>To: Punit Dambiwal <hypunit@gmail.com><br>Cc: "users@ovirt.org" <users@ovirt.org><br>Subject: Re: [ovirt-users] Loopback interface has huge network<br> transctions<br>Message-ID: <6E2536E1-27AE-47EB-BCA7-88B3901D2B02@redhat.com><br>Content-Type: text/plain; charset="us-ascii"<br><div><br></div>Hi Punit,<br><div><br></div>it is ok since ovirt-engine is using loopback for its purposes, e.g. postgress databas access. Try to check netstat -putna | grep 127.0.0 to see how many things are attached to it.<br><div><br></div>If you are interested in checking what is going on a bit more have a look @ this great how-to http://www.slashroot.in/find-network-traffic-and-bandwidth-usage-process-linux <http://www.slashroot.in/find-network-traffic-and-bandwidth-usage-process-linux><br><div><br></div><br>HTH <br><div><br></div>Martin Pavlik<br><div><br></div>RHEV QE<br><div><br></div>> On 26 Jan 2015, at 02:24, Punit Dambiwal <hypunit@gmail.com> wrote:<br>> <br>> Hi,<br>> <br>> I have noticed that the loop back interface has huge network packets sent and received...is it common or need to some tweaks.... <br>> <br>> 1. Ovirt 3.5.1<br>> 2. 
Before ovirt engine installation...loop back address doesn't has that huge amount of packets sent/receive....<br>> 3. After Ovirt engine install it's keep increasing.....and in just 48 hours it reach to 35 GB...<br>> <br>> [root@ccr01 ~]# ifconfig<br>> eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500<br>> inet 43.252.x.x netmask 255.255.255.0 broadcast 43.252.x.x<br>> ether 60:eb:69:82:0b:4c txqueuelen 1000 (Ethernet)<br>> RX packets 6605350 bytes 6551029484 (6.1 GiB)<br>> RX errors 0 dropped 120622 overruns 0 frame 0<br>> TX packets 2155524 bytes 431348174 (411.3 MiB)<br>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br>> device memory 0xdf6e0000-df700000<br>> <br>> eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500<br>> inet 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255<br>> ether 60:eb:69:82:0b:4d txqueuelen 1000 (Ethernet)<br>> RX packets 788160 bytes 133915292 (127.7 MiB)<br>> RX errors 0 dropped 0 overruns 0 frame 0<br>> TX packets 546352 bytes 131672255 (125.5 MiB)<br>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br>> device memory 0xdf660000-df680000<br>> <br>> lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536<br>> inet 127.0.0.1 netmask 255.0.0.0<br>> loop txqueuelen 0 (Local Loopback)<br>> RX packets 84747311 bytes 40376482560 (37.6 GiB)<br>> RX errors 0 dropped 0 overruns 0 frame 0<br>> TX packets 84747311 bytes 40376482560 (37.6 GiB)<br>> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0<br>> <br>> [root@ccr01 ~]# w<br>> 09:23:07 up 2 days, 11:43, 1 user, load average: 0.27, 0.30, 0.31<br>> USER TTY LOGIN@ IDLE JCPU PCPU WHAT<br>> root pts/0 09:16 3.00s 0.01s 0.00s w<br>> [root@ccr01 ~]#<br>> <br>> Thanks,<br>> Punit<br>> _______________________________________________<br>> Users mailing list<br>> Users@ovirt.org<br>> http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div>-------------- next part --------------<br>An HTML attachment was scrubbed...<br>URL: <http://lists.ovirt.org/pipermail/users/attachments/20150126/c38655c3/attachment-0001.html><br><div><br></div>------------------------------<br><div><br></div>Message: 4<br>Date: Mon, 26 Jan 2015 13:45:56 +0000<br>From: Dan Kenigsberg <danken@redhat.com><br>To: Lior Vernia <lvernia@redhat.com><br>Cc: "Users@ovirt.org List" <Users@ovirt.org>, "devel@ovirt.org"<br> <devel@ovirt.org><br>Subject: Re: [ovirt-users] [ovirt-devel] oVirt 3.6 Feature: Cumulative<br> Network Usage Statistics<br>Message-ID: <20150126134555.GI14455@redhat.com><br>Content-Type: text/plain; charset=us-ascii<br><div><br></div>On Mon, Dec 22, 2014 at 01:40:06PM +0200, Lior Vernia wrote:<br>> Hello users and developers,<br>> <br>> Just put up a feature page for the aforementioned feature; in summary,<br>> to report total RX/TX statistics for hosts and VMs in oVirt. 
This has<br>> been requested several times on the users mailing list, and is<br>> especially useful for accounting in VDI deployments.<br>> <br>> You're more than welcome to review the feature page:<br>> http://www.ovirt.org/Features/Cumulative_RX_TX_Statistics<br><div><br></div>Sorry for the late review; I have a couple of questions/comments.<br>- What do you mean by "VDI use cases" in the "Benefit to oVirt sanpshot"<br> section?<br> Do you refer to hosting services who would like to charge their<br> customers based on actual bandwidth usage?<br>- I've added another motivation: currently-reported rxRate/txRate<br> can be utterly meaningless.<br><div><br></div><br>I don't see reference to nasty negative flows: what happens if a host<br>disappears? Or a VM? I suppose there's always a chance that some traffic<br>would go unaccounted for. But do you expect to extract this information<br>somehow? Either way, it should be mentioned as a caveat on the feature<br>page.<br><div><br></div>> <br>> Note that this only deals with network usage - it'll be great if we have<br>> similar features for CPU and disk usage!<br><div><br></div>There's a formal feature request about this:<br> Bug 1172153 - [RFE] Collect CPU, IO and network accounting<br> information<br><div><br></div>Dan<br><div><br></div><br>------------------------------<br><div><br></div>Message: 5<br>Date: Mon, 26 Jan 2015 09:00:29 -0500 (EST)<br>From: Tolik Litovsky <tlitovsk@redhat.com><br>To: users@ovirt.org<br>Cc: devel@ovirt.org<br>Subject: Re: [ovirt-users] Cutting edge ovirt node images.<br>Message-ID: <14200256.98.1422280826405.JavaMail.tlitovsk@tolikl><br>Content-Type: text/plain; charset=utf-8<br><div><br></div>Hi<br><div><br></div>Both projects are now built with VDSM and Hosted Engine plugins.<br><div><br></div>Best Regards.<br>Tolik.<br><div><br></div>----- Original Message -----<br>From: "Anatoly Litvosky" <tlitovsk@redhat.com><br>To: users@ovirt.org<br>Cc: devel@ovirt.org<br>Sent: Thursday, 15 January, 2015 3:46:10 PM<br>Subject: Re: [ovirt-users] Cutting edge ovirt node images.<br><div><br></div>Hi <br><div><br></div>A centos7 based ovirt-node project joined our jenkins<br>http://jenkins.ovirt.org/job/ovirt-node_master_create-iso-el7_merged/<br><div><br></div>Best Regards<br>Tolik<br><div><br></div>On Sun, 2015-01-11 at 17:29 +0200, Anatoly Litvosky wrote:<br>> Hi<br>> <br>> For all of you that want the latest images for ovirt node.<br>> For some time there is a new job in the ovirt jenkins that builds an<br>> image for every commit made.<br>> http://jenkins.ovirt.org/job/ovirt-node_master_create-iso-fc20_merged/<br>> Currently for fedora 20 but hopefully more to come.<br>> <br>> Many thanks to David (dcaro) for all the effort.<br>> <br>> Best regards.<br>> Tolik.<br>> <br>> <br>> <br><div><br></div><br>_______________________________________________<br>Users mailing list<br>Users@ovirt.org<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div><br>------------------------------<br><div><br></div>Message: 6<br>Date: Mon, 26 Jan 2015 10:19:29 -0500 (EST)<br>From: Fabian Deutsch <fdeutsch@redhat.com><br>To: Tolik Litovsky <tlitovsk@redhat.com><br>Cc: users@ovirt.org, devel@ovirt.org<br>Subject: Re: [ovirt-users] Cutting edge ovirt node images.<br>Message-ID:<br> <1747701132.306300.1422285569125.JavaMail.zimbra@redhat.com><br>Content-Type: text/plain; charset=utf-8<br><div><br></div>Kudos Tolik!<br><div><br></div>- fabian<br><div><br></div>----- Original Message -----<br>> Hi<br>> <br>> Both projects are now built 
with VDSM and Hosted Engine plugins.<br>> <br>> Best Regards.<br>> Tolik.<br>> <br>> ----- Original Message -----<br>> From: "Anatoly Litvosky" <tlitovsk@redhat.com><br>> To: users@ovirt.org<br>> Cc: devel@ovirt.org<br>> Sent: Thursday, 15 January, 2015 3:46:10 PM<br>> Subject: Re: [ovirt-users] Cutting edge ovirt node images.<br>> <br>> Hi<br>> <br>> A centos7 based ovirt-node project joined our jenkins<br>> http://jenkins.ovirt.org/job/ovirt-node_master_create-iso-el7_merged/<br>> <br>> Best Regards<br>> Tolik<br>> <br>> On Sun, 2015-01-11 at 17:29 +0200, Anatoly Litvosky wrote:<br>> > Hi<br>> > <br>> > For all of you that want the latest images for ovirt node.<br>> > For some time there is a new job in the ovirt jenkins that builds an<br>> > image for every commit made.<br>> > http://jenkins.ovirt.org/job/ovirt-node_master_create-iso-fc20_merged/<br>> > Currently for fedora 20 but hopefully more to come.<br>> > <br>> > Many thanks to David (dcaro) for all the effort.<br>> > <br>> > Best regards.<br>> > Tolik.<br>> > <br>> > <br>> > <br>> <br>> <br>> _______________________________________________<br>> Users mailing list<br>> Users@ovirt.org<br>> http://lists.ovirt.org/mailman/listinfo/users<br>> _______________________________________________<br>> Users mailing list<br>> Users@ovirt.org<br>> http://lists.ovirt.org/mailman/listinfo/users<br>> <br><div><br></div><br>------------------------------<br><div><br></div>Message: 7<br>Date: Mon, 26 Jan 2015 10:27:23 -0500 (EST)<br>From: Fabian Deutsch <fdeutsch@redhat.com><br>To: Nir Soffer <nsoffer@redhat.com><br>Cc: Benjamin Marzinski <bmarzins@redhat.com>, users <users@ovirt.org><br>Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?<br>Message-ID:<br> <1036916460.310615.1422286043504.JavaMail.zimbra@redhat.com><br>Content-Type: text/plain; charset=utf-8<br><div><br></div><br><div><br></div>----- Original Message -----<br>> ----- Original Message -----<br>> > From: "Dan Kenigsberg" <danken@redhat.com><br>> > To: "Gianluca Cecchi" <gianluca.cecchi@gmail.com>, nsoffer@redhat.com<br>> > Cc: "users" <users@ovirt.org>, ykaplan@redhat.com<br>> > Sent: Monday, January 26, 2015 2:09:23 PM<br>> > Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf?<br>> > <br>> > On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote:<br>> > > Hello,<br>> > > on my all-in-one installation @home I had 3.5.0 with F20.<br>> > > Today I updated to 3.5.1.<br>> > > <br>> > > it seems it modified /etc/multipath.conf preventing me from using my<br>> > > second<br>> > > disk at all...<br>> > > <br>> > > My system has internal ssd disk (sda) for OS and one local storage domain<br>> > > and another disk (sdb) with some partitions (on one of them there is also<br>> > > another local storage domain).<br>> > > <br>> > > At reboot I was put in emergency boot because partitions at sdb disk<br>> > > could<br>> > > not be mounted (they were busy).<br>> > > it took me some time to understand that the problem was due to sdb gone<br>> > > managed as multipath device and so busy for partitions to be mounted.<br>> > > <br>> > > Here you can find how multipath became after update and reboot<br>> > > https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing<br>> > > <br>> > > No device-mapper-multipath update in yum.log<br>> > > <br>> > > Also it seems that after changing it, it was then reverted at boot again<br>> > > (I<br>> > > don't know if the responsible was initrd/dracut or vdsmd) so in the mean<br>> > > time the only thing 
I could do was to make the file immutable with<br>> > > <br>> > > chattr +i /etc/multipath.conf<br>> > <br>> > The "supported" method of achieving this is to place "# RHEV PRIVATE" in<br>> > the second line of your hand-modified multipath.conf<br>> > <br>> > I do not understand why this has happened only after upgrade to 3.5.1 -<br>> > 3.5.0's should have reverted you multipath.conf just as well during each<br>> > vdsm startup.<br>> > <br>> > The good thing is that this annoying behavior has been dropped from the<br>> > master branch, so that 3.6 is not going to have it. Vdsm is not to mess<br>> > with other services config file while it is running. The logic moved to<br>> > `vdsm-tool configure`<br>> > <br>> > > <br>> > > and so I was able to reboot and verify that my partitions on sdb were ok<br>> > > and I was able to mount them (for safe I also ran an fsck against them)<br>> > > <br>> > > Update ran around 19:20 and finished at 19:34<br>> > > here the log in gzip format<br>> > > https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing<br>> > > <br>> > > Reboot was done around 21:10-21:14<br>> > > <br>> > > Here my /var/log/messages in gzip format, where you can see latest days.<br>> > > https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing<br>> > > <br>> > > <br>> > > Any suggestion appreciated.<br>> > > <br>> > > Current multipath.conf (where I also commented out the getuid_callout<br>> > > that<br>> > > is not used anymore):<br>> > > <br>> > > [root@tekkaman setup]# cat /etc/multipath.conf<br>> > > # RHEV REVISION 1.1<br>> > > <br>> > > blacklist {<br>> > > devnode "^(sda|sdb)[0-9]*"<br>> > > }<br>> <br>> <br>> I think what happened is:<br>> <br>> 1. 3.5.1 had new multipath version<br>> 2. So vdsm upgraded the local file<br>> 3. 
blacklist above was removed<br>> (it should exists in /etc/multipath.bak)<br>> <br>> To prevent local changes, you have to mark the file as private<br>> as Dan suggests.<br>> <br>> Seems to be related to the find_multiapth = "yes" bug:<br>> https://bugzilla.redhat.com/show_bug.cgi?id=1173290<br><div><br></div>The symptoms above sound exactly like this issue.<br>When find_multipaths is no (the default when the directive is not present),<br>multipath tries to claim all non-blacklisted devices, and that is what happened above.<br><div><br></div>Blacklisting the devices works, or adding "find_multipaths yes" should also work, because<br>in that case only devices which have more than one path (or are explicitly named) will be<br>claimed by multipath.<br><div><br></div>My 2ct.<br><div><br></div>- fabian<br><div><br></div>> Ben, can you confirm that this is the same issue?<br>> <br>> > > <br>> > > defaults {<br>> > > polling_interval 5<br>> > > #getuid_callout "/usr/lib/udev/scsi_id --whitelisted<br>> > > --replace-whitespace --device=/dev/%n"<br>> > > no_path_retry fail<br>> > > user_friendly_names no<br>> > > flush_on_last_del yes<br>> > > fast_io_fail_tmo 5<br>> > > dev_loss_tmo 30<br>> > > max_fds 4096<br>> > > }<br>> > <br>> <br>> Regards,<br>> Nir<br>> _______________________________________________<br>> Users mailing list<br>> Users@ovirt.org<br>> http://lists.ovirt.org/mailman/listinfo/users<br>> <br><div><br></div><br>------------------------------<br><div><br></div>_______________________________________________<br>Users mailing list<br>Users@ovirt.org<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div><br>End of Users Digest, Vol 40, Issue 140<br>**************************************<br></div><div><br></div>
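<div>P.S. Following up on the tcpdump suggestion at the top of this message, here is a minimal capture sketch; the interface, port and file names are only placeholders, so adjust them to your setup:</div><div><br></div><div># capture everything on the loopback interface into a file that Wireshark can open<br>tcpdump -i lo -s 0 -w /tmp/loopback.pcap<br># or narrow the capture down, e.g. to the engine's local PostgreSQL traffic (assuming the default port 5432)<br>tcpdump -i lo -s 0 -w /tmp/postgres.pcap port 5432<br># stop with Ctrl-C, copy the .pcap off the host and open it in Wireshark</div><div><br></div><div>P.P.S. For the multipath.conf discussion above: combining Dan's "# RHEV PRIVATE" marker (placed on the second line) with Fabian's find_multipaths suggestion, a hand-edited file that vdsm leaves alone would start roughly like this; this is only a sketch, keep your own blacklist and defaults entries:</div><div><br></div><div># RHEV REVISION 1.1<br># RHEV PRIVATE<br><br>defaults {<br>    find_multipaths yes<br>    ...<br>}</div><div><br></div></div></body></html>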