[ovirt-users] Loopback interface has huge network transactions

Nikolai Sednev nsednev at redhat.com
Mon Jan 26 16:28:43 UTC 2015


Hi Punit, 
You might also find tcpdump a useful tool; you can then investigate the recorded data with Wireshark. 
http://www.tcpdump.org/manpages/tcpdump.1.html 
https://ask.wireshark.org/questions/23138/wireshark-for-red-hat-enterprise-linux 
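
For example, something along these lines could be used to record the loopback traffic and open it in Wireshark later (an untested sketch; the capture file path and the port filter are only illustrative assumptions): 

    # capture full packets on the loopback interface into a pcap file
    tcpdump -i lo -s 0 -w /tmp/loopback.pcap
    # or narrow the capture down, e.g. to the assumed PostgreSQL port 5432
    tcpdump -i lo -s 0 -w /tmp/loopback-pgsql.pcap port 5432
    # copy the resulting .pcap to your workstation and open it in Wireshark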

Thanks in advance. 

Best regards, 
Nikolai 
____________________ 
Nikolai Sednev 
Senior Quality Engineer at Compute team 
Red Hat Israel 
34 Jerusalem Road, 
Ra'anana, Israel 43501 

Tel: +972 9 7692043 
Mobile: +972 52 7342734 
Email: nsednev at redhat.com 
IRC: nsednev 

----- Original Message -----

From: users-request at ovirt.org 
To: users at ovirt.org 
Sent: Monday, January 26, 2015 5:27:25 PM 
Subject: Users Digest, Vol 40, Issue 140 

Send Users mailing list submissions to 
users at ovirt.org 

To subscribe or unsubscribe via the World Wide Web, visit 
http://lists.ovirt.org/mailman/listinfo/users 
or, via email, send a message with subject or body 'help' to 
users-request at ovirt.org 

You can reach the person managing the list at 
users-owner at ovirt.org 

When replying, please edit your Subject line so it is more specific 
than "Re: Contents of Users digest..." 


Today's Topics: 

1. Re: Update to 3.5.1 scrambled multipath.conf? (Dan Kenigsberg) 
2. Re: Update to 3.5.1 scrambled multipath.conf? (Nir Soffer) 
3. Re: Loopback interface has huge network transactions 
(Martin Pavlík) 
4. Re: [ovirt-devel] oVirt 3.6 Feature: Cumulative Network 
Usage Statistics (Dan Kenigsberg) 
5. Re: Cutting edge ovirt node images. (Tolik Litovsky) 
6. Re: Cutting edge ovirt node images. (Fabian Deutsch) 
7. Re: Update to 3.5.1 scrambled multipath.conf? (Fabian Deutsch) 


---------------------------------------------------------------------- 

Message: 1 
Date: Mon, 26 Jan 2015 12:09:23 +0000 
From: Dan Kenigsberg <danken at redhat.com> 
To: Gianluca Cecchi <gianluca.cecchi at gmail.com>, nsoffer at redhat.com 
Cc: users <users at ovirt.org> 
Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf? 
Message-ID: <20150126120923.GF14455 at redhat.com> 
Content-Type: text/plain; charset=us-ascii 

On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote: 
> Hello, 
> on my all-in-one installation @home I had 3.5.0 with F20. 
> Today I updated to 3.5.1. 
> 
> it seems the update modified /etc/multipath.conf, preventing me from using my second 
> disk at all... 
> 
> My system has internal ssd disk (sda) for OS and one local storage domain 
> and another disk (sdb) with some partitions (on one of them there is also 
> another local storage domain). 
> 
> At reboot I was dropped into emergency mode because the partitions on the sdb disk 
> could not be mounted (they were busy). 
> It took me some time to understand that the problem was that sdb was now managed 
> as a multipath device and therefore busy, so its partitions could not be mounted. 
> 
> Here you can find how multipath became after update and reboot 
> https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing 
> 
> No device-mapper-multipath update in yum.log 
> 
> Also, it seems that after changing it, the file was reverted again at boot (I 
> don't know whether initrd/dracut or vdsmd was responsible), so in the meantime 
> the only thing I could do was to make the file immutable with 
> 
> chattr +i /etc/multipath.conf 

The "supported" method of achieving this is to place "# RHEV PRIVATE" in 
the second line of your hand-modified multipath.conf 
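
As a minimal sketch (reusing the blacklist from your current file), the beginning of a hand-maintained multipath.conf protected this way would look roughly like: 

    # RHEV REVISION 1.1
    # RHEV PRIVATE

    blacklist {
        devnode "^(sda|sdb)[0-9]*"
    }

With that marker on the second line vdsm leaves the file alone, so chattr +i should not be needed. 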

I do not understand why this happened only after the upgrade to 3.5.1 - 
3.5.0 should have reverted your multipath.conf just as well during each 
vdsm startup. 

The good thing is that this annoying behavior has been dropped from the 
master branch, so 3.6 is not going to have it. Vdsm will no longer mess 
with other services' config files while it is running; the logic moved to 
`vdsm-tool configure`. 
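
For reference, the vdsm-tool side looks roughly like this (a sketch, not a full upgrade procedure): 

    # apply the one-time configuration of the services vdsm depends on
    vdsm-tool configure --force
    # check which modules are (not yet) configured
    vdsm-tool is-configured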

> 
> and so I was able to reboot and verify that my partitions on sdb were ok, 
> and I was able to mount them (to be safe I also ran an fsck against them) 
> 
> The update ran around 19:20 and finished at 19:34; 
> here is the log in gzip format: 
> https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing 
> 
> Reboot was done around 21:10-21:14 
> 
> Here is my /var/log/messages in gzip format, where you can see the latest days: 
> https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing 
> 
> 
> Any suggestion appreciated. 
> 
> Current multipath.conf (where I also commented out the getuid_callout that 
> is not used anymore): 
> 
> [root at tekkaman setup]# cat /etc/multipath.conf 
> # RHEV REVISION 1.1 
> 
> blacklist { 
>     devnode "^(sda|sdb)[0-9]*" 
> } 
> 
> defaults { 
>     polling_interval 5 
>     # getuid_callout "/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n" 
>     no_path_retry fail 
>     user_friendly_names no 
>     flush_on_last_del yes 
>     fast_io_fail_tmo 5 
>     dev_loss_tmo 30 
>     max_fds 4096 
> } 


------------------------------ 

Message: 2 
Date: Mon, 26 Jan 2015 07:37:59 -0500 (EST) 
From: Nir Soffer <nsoffer at redhat.com> 
To: Dan Kenigsberg <danken at redhat.com> 
Cc: Benjamin Marzinski <bmarzins at redhat.com>, users <users at ovirt.org> 
Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf? 
Message-ID: 
<610569491.313815.1422275879664.JavaMail.zimbra at redhat.com> 
Content-Type: text/plain; charset=utf-8 

----- Original Message ----- 
> From: "Dan Kenigsberg" <danken at redhat.com> 
> To: "Gianluca Cecchi" <gianluca.cecchi at gmail.com>, nsoffer at redhat.com 
> Cc: "users" <users at ovirt.org>, ykaplan at redhat.com 
> Sent: Monday, January 26, 2015 2:09:23 PM 
> Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf? 
> 
> On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote: 
> > Hello, 
> > on my all-in-one installation @home I had 3.5.0 with F20. 
> > Today I updated to 3.5.1. 
> > 
> > it seems it modified /etc/multipath.conf preventing me from using my second 
> > disk at all... 
> > 
> > My system has internal ssd disk (sda) for OS and one local storage domain 
> > and another disk (sdb) with some partitions (on one of them there is also 
> > another local storage domain). 
> > 
> > At reboot I was put in emergency boot because partitions at sdb disk could 
> > not be mounted (they were busy). 
> > it took me some time to understand that the problem was due to sdb gone 
> > managed as multipath device and so busy for partitions to be mounted. 
> > 
> > Here you can find how multipath became after update and reboot 
> > https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing 
> > 
> > No device-mapper-multipath update in yum.log 
> > 
> > Also it seems that after changing it, it was then reverted at boot again (I 
> > don't know if the responsible was initrd/dracut or vdsmd) so in the mean 
> > time the only thing I could do was to make the file immutable with 
> > 
> > chattr +i /etc/multipath.conf 
> 
> The "supported" method of achieving this is to place "# RHEV PRIVATE" in 
> the second line of your hand-modified multipath.conf 
> 
> I do not understand why this has happened only after upgrade to 3.5.1 - 
> 3.5.0's should have reverted you multipath.conf just as well during each 
> vdsm startup. 
> 
> The good thing is that this annoying behavior has been dropped from the 
> master branch, so that 3.6 is not going to have it. Vdsm is not to mess 
> with other services config file while it is running. The logic moved to 
> `vdsm-tool configure` 
> 
> > 
> > and so I was able to reboot and verify that my partitions on sdb were ok 
> > and I was able to mount them (for safe I also ran an fsck against them) 
> > 
> > Update ran around 19:20 and finished at 19:34 
> > here the log in gzip format 
> > https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing 
> > 
> > Reboot was done around 21:10-21:14 
> > 
> > Here my /var/log/messages in gzip format, where you can see latest days. 
> > https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing 
> > 
> > 
> > Any suggestion appreciated. 
> > 
> > Current multipath.conf (where I also commented out the getuid_callout that 
> > is not used anymore): 
> > 
> > [root at tekkaman setup]# cat /etc/multipath.conf 
> > # RHEV REVISION 1.1 
> > 
> > blacklist { 
> > devnode "^(sda|sdb)[0-9]*" 
> > } 


I think what happened is: 

1. 3.5.1 came with a new multipath configuration version 
2. So vdsm upgraded the local file 
3. The blacklist above was removed 
(it should exist in /etc/multipath.bak) 

To prevent your local changes from being overwritten, you have to mark the file 
as private, as Dan suggests. 
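
If you want your old blacklist back, something like this should do it (a sketch; it assumes the backup mentioned above is actually there): 

    # see what vdsm changed compared to the saved copy
    diff -u /etc/multipath.bak /etc/multipath.conf
    # restore the saved copy, then add the "# RHEV PRIVATE" marker as its second line
    cp /etc/multipath.bak /etc/multipath.conf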

This seems to be related to the find_multipaths = "yes" bug: 
https://bugzilla.redhat.com/show_bug.cgi?id=1173290 

Ben, can you confirm that this is the same issue? 

> > 
> > defaults { 
> > polling_interval 5 
> > #getuid_callout "/usr/lib/udev/scsi_id --whitelisted 
> > --replace-whitespace --device=/dev/%n" 
> > no_path_retry fail 
> > user_friendly_names no 
> > flush_on_last_del yes 
> > fast_io_fail_tmo 5 
> > dev_loss_tmo 30 
> > max_fds 4096 
> > } 
> 

Regards, 
Nir 


------------------------------ 

Message: 3 
Date: Mon, 26 Jan 2015 14:25:23 +0100 
From: Martin Pavlík <mpavlik at redhat.com> 
To: Punit Dambiwal <hypunit at gmail.com> 
Cc: "users at ovirt.org" <users at ovirt.org> 
Subject: Re: [ovirt-users] Loopback interface has huge network 
transactions 
Message-ID: <6E2536E1-27AE-47EB-BCA7-88B3901D2B02 at redhat.com> 
Content-Type: text/plain; charset="us-ascii" 

Hi Punit, 

It is OK, since ovirt-engine uses the loopback interface for its own purposes, e.g. PostgreSQL database access. Try netstat -putna | grep 127.0.0 to see how many things are attached to it. 

If you are interested in digging a bit deeper into what is going on, have a look at this great how-to: http://www.slashroot.in/find-network-traffic-and-bandwidth-usage-process-linux 
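
If you just want a quick look from the command line first, something like this (an untested sketch) already tells a lot: 

    # which processes are listening on / talking over 127.0.0.1
    netstat -putna | grep 127.0.0
    # watch the raw per-interface byte counters grow
    watch -n 5 "grep -E 'lo:|eth' /proc/net/dev"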


HTH 

Martin Pavlik 

RHEV QE 

> On 26 Jan 2015, at 02:24, Punit Dambiwal <hypunit at gmail.com> wrote: 
> 
> Hi, 
> 
> I have noticed that the loopback interface has a huge number of network packets sent and received... is this common, or does it need some tweaking? 
> 
> 1. oVirt 3.5.1 
> 2. Before the ovirt-engine installation the loopback address didn't have that huge amount of packets sent/received... 
> 3. After the ovirt-engine install it keeps increasing... and in just 48 hours it reached 35 GB... 
> 
> [root at ccr01 ~]# ifconfig 
> eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
> inet 43.252.x.x netmask 255.255.255.0 broadcast 43.252.x.x 
> ether 60:eb:69:82:0b:4c txqueuelen 1000 (Ethernet) 
> RX packets 6605350 bytes 6551029484 (6.1 GiB) 
> RX errors 0 dropped 120622 overruns 0 frame 0 
> TX packets 2155524 bytes 431348174 (411.3 MiB) 
> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 
> device memory 0xdf6e0000-df700000 
> 
> eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 
> inet 10.10.0.2 netmask 255.255.255.0 broadcast 10.10.0.255 
> ether 60:eb:69:82:0b:4d txqueuelen 1000 (Ethernet) 
> RX packets 788160 bytes 133915292 (127.7 MiB) 
> RX errors 0 dropped 0 overruns 0 frame 0 
> TX packets 546352 bytes 131672255 (125.5 MiB) 
> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 
> device memory 0xdf660000-df680000 
> 
> lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536 
> inet 127.0.0.1 netmask 255.0.0.0 
> loop txqueuelen 0 (Local Loopback) 
> RX packets 84747311 bytes 40376482560 (37.6 GiB) 
> RX errors 0 dropped 0 overruns 0 frame 0 
> TX packets 84747311 bytes 40376482560 (37.6 GiB) 
> TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 
> 
> [root at ccr01 ~]# w 
> 09:23:07 up 2 days, 11:43, 1 user, load average: 0.27, 0.30, 0.31 
> USER TTY LOGIN@ IDLE JCPU PCPU WHAT 
> root pts/0 09:16 3.00s 0.01s 0.00s w 
> [root at ccr01 ~]# 
> 
> Thanks, 
> Punit 
> _______________________________________________ 
> Users mailing list 
> Users at ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 


------------------------------ 

Message: 4 
Date: Mon, 26 Jan 2015 13:45:56 +0000 
From: Dan Kenigsberg <danken at redhat.com> 
To: Lior Vernia <lvernia at redhat.com> 
Cc: "Users at ovirt.org List" <Users at ovirt.org>, "devel at ovirt.org" 
<devel at ovirt.org> 
Subject: Re: [ovirt-users] [ovirt-devel] oVirt 3.6 Feature: Cumulative 
Network Usage Statistics 
Message-ID: <20150126134555.GI14455 at redhat.com> 
Content-Type: text/plain; charset=us-ascii 

On Mon, Dec 22, 2014 at 01:40:06PM +0200, Lior Vernia wrote: 
> Hello users and developers, 
> 
> Just put up a feature page for the aforementioned feature; in summary, 
> to report total RX/TX statistics for hosts and VMs in oVirt. This has 
> been requested several times on the users mailing list, and is 
> especially useful for accounting in VDI deployments. 
> 
> You're more than welcome to review the feature page: 
> http://www.ovirt.org/Features/Cumulative_RX_TX_Statistics 

Sorry for the late review; I have a couple of questions/comments. 
- What do you mean by "VDI use cases" in the "Benefit to oVirt snapshot" 
section? 
Do you refer to hosting services who would like to charge their 
customers based on actual bandwidth usage? 
- I've added another motivation: currently-reported rxRate/txRate 
can be utterly meaningless. 


I don't see reference to nasty negative flows: what happens if a host 
disappears? Or a VM? I suppose there's always a chance that some traffic 
would go unaccounted for. But do you expect to extract this information 
somehow? Either way, it should be mentioned as a caveat on the feature 
page. 

> 
> Note that this only deals with network usage - it'll be great if we have 
> similar features for CPU and disk usage! 

There's a formal feature request about this: 
Bug 1172153 - [RFE] Collect CPU, IO and network accounting 
information 

Dan 


------------------------------ 

Message: 5 
Date: Mon, 26 Jan 2015 09:00:29 -0500 (EST) 
From: Tolik Litovsky <tlitovsk at redhat.com> 
To: users at ovirt.org 
Cc: devel at ovirt.org 
Subject: Re: [ovirt-users] Cutting edge ovirt node images. 
Message-ID: <14200256.98.1422280826405.JavaMail.tlitovsk at tolikl> 
Content-Type: text/plain; charset=utf-8 

Hi 

Both projects are now built with VDSM and Hosted Engine plugins. 

Best Regards. 
Tolik. 

----- Original Message ----- 
From: "Anatoly Litvosky" <tlitovsk at redhat.com> 
To: users at ovirt.org 
Cc: devel at ovirt.org 
Sent: Thursday, 15 January, 2015 3:46:10 PM 
Subject: Re: [ovirt-users] Cutting edge ovirt node images. 

Hi 

A CentOS 7-based ovirt-node project has joined our Jenkins: 
http://jenkins.ovirt.org/job/ovirt-node_master_create-iso-el7_merged/ 

Best Regards 
Tolik 

On Sun, 2015-01-11 at 17:29 +0200, Anatoly Litvosky wrote: 
> Hi 
> 
> For all of you who want the latest images for oVirt Node: 
> for some time now there has been a new job in the oVirt Jenkins that builds an 
> image for every commit made. 
> http://jenkins.ovirt.org/job/ovirt-node_master_create-iso-fc20_merged/ 
> Currently for Fedora 20, but hopefully more to come. 
> 
> Many thanks to David (dcaro) for all the effort. 
> 
> Best regards. 
> Tolik. 
> 
> 
> 


_______________________________________________ 
Users mailing list 
Users at ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 


------------------------------ 

Message: 6 
Date: Mon, 26 Jan 2015 10:19:29 -0500 (EST) 
From: Fabian Deutsch <fdeutsch at redhat.com> 
To: Tolik Litovsky <tlitovsk at redhat.com> 
Cc: users at ovirt.org, devel at ovirt.org 
Subject: Re: [ovirt-users] Cutting edge ovirt node images. 
Message-ID: 
<1747701132.306300.1422285569125.JavaMail.zimbra at redhat.com> 
Content-Type: text/plain; charset=utf-8 

Kudos Tolik! 

- fabian 

----- Original Message ----- 
> Hi 
> 
> Both projects are now built with VDSM and Hosted Engine plugins. 
> 
> Best Regards. 
> Tolik. 
> 
> ----- Original Message ----- 
> From: "Anatoly Litvosky" <tlitovsk at redhat.com> 
> To: users at ovirt.org 
> Cc: devel at ovirt.org 
> Sent: Thursday, 15 January, 2015 3:46:10 PM 
> Subject: Re: [ovirt-users] Cutting edge ovirt node images. 
> 
> Hi 
> 
> A centos7 based ovirt-node project joined our jenkins 
> http://jenkins.ovirt.org/job/ovirt-node_master_create-iso-el7_merged/ 
> 
> Best Regards 
> Tolik 
> 
> On Sun, 2015-01-11 at 17:29 +0200, Anatoly Litvosky wrote: 
> > Hi 
> > 
> > For all of you that want the latest images for ovirt node. 
> > For some time there is a new job in the ovirt jenkins that builds an 
> > image for every commit made. 
> > http://jenkins.ovirt.org/job/ovirt-node_master_create-iso-fc20_merged/ 
> > Currently for fedora 20 but hopefully more to come. 
> > 
> > Many thanks to David (dcaro) for all the effort. 
> > 
> > Best regards. 
> > Tolik. 
> > 
> > 
> > 
> 
> 
> _______________________________________________ 
> Users mailing list 
> Users at ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> _______________________________________________ 
> Users mailing list 
> Users at ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> 


------------------------------ 

Message: 7 
Date: Mon, 26 Jan 2015 10:27:23 -0500 (EST) 
From: Fabian Deutsch <fdeutsch at redhat.com> 
To: Nir Soffer <nsoffer at redhat.com> 
Cc: Benjamin Marzinski <bmarzins at redhat.com>, users <users at ovirt.org> 
Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf? 
Message-ID: 
<1036916460.310615.1422286043504.JavaMail.zimbra at redhat.com> 
Content-Type: text/plain; charset=utf-8 



----- Original Message ----- 
> ----- Original Message ----- 
> > From: "Dan Kenigsberg" <danken at redhat.com> 
> > To: "Gianluca Cecchi" <gianluca.cecchi at gmail.com>, nsoffer at redhat.com 
> > Cc: "users" <users at ovirt.org>, ykaplan at redhat.com 
> > Sent: Monday, January 26, 2015 2:09:23 PM 
> > Subject: Re: [ovirt-users] Update to 3.5.1 scrambled multipath.conf? 
> > 
> > On Sat, Jan 24, 2015 at 12:59:01AM +0100, Gianluca Cecchi wrote: 
> > > Hello, 
> > > on my all-in-one installation @home I had 3.5.0 with F20. 
> > > Today I updated to 3.5.1. 
> > > 
> > > it seems it modified /etc/multipath.conf preventing me from using my 
> > > second 
> > > disk at all... 
> > > 
> > > My system has internal ssd disk (sda) for OS and one local storage domain 
> > > and another disk (sdb) with some partitions (on one of them there is also 
> > > another local storage domain). 
> > > 
> > > At reboot I was put in emergency boot because partitions at sdb disk 
> > > could 
> > > not be mounted (they were busy). 
> > > it took me some time to understand that the problem was due to sdb gone 
> > > managed as multipath device and so busy for partitions to be mounted. 
> > > 
> > > Here you can find how multipath became after update and reboot 
> > > https://drive.google.com/file/d/0BwoPbcrMv8mvS0FkMnNyMTdVTms/view?usp=sharing 
> > > 
> > > No device-mapper-multipath update in yum.log 
> > > 
> > > Also it seems that after changing it, it was then reverted at boot again 
> > > (I 
> > > don't know if the responsible was initrd/dracut or vdsmd) so in the mean 
> > > time the only thing I could do was to make the file immutable with 
> > > 
> > > chattr +i /etc/multipath.conf 
> > 
> > The "supported" method of achieving this is to place "# RHEV PRIVATE" in 
> > the second line of your hand-modified multipath.conf 
> > 
> > I do not understand why this has happened only after upgrade to 3.5.1 - 
> > 3.5.0's should have reverted you multipath.conf just as well during each 
> > vdsm startup. 
> > 
> > The good thing is that this annoying behavior has been dropped from the 
> > master branch, so that 3.6 is not going to have it. Vdsm is not to mess 
> > with other services config file while it is running. The logic moved to 
> > `vdsm-tool configure` 
> > 
> > > 
> > > and so I was able to reboot and verify that my partitions on sdb were ok 
> > > and I was able to mount them (for safe I also ran an fsck against them) 
> > > 
> > > Update ran around 19:20 and finished at 19:34 
> > > here the log in gzip format 
> > > https://drive.google.com/file/d/0BwoPbcrMv8mvWjJDTXU1YjRWOFk/view?usp=sharing 
> > > 
> > > Reboot was done around 21:10-21:14 
> > > 
> > > Here my /var/log/messages in gzip format, where you can see latest days. 
> > > https://drive.google.com/file/d/0BwoPbcrMv8mvMm1ldXljd3hZWnM/view?usp=sharing 
> > > 
> > > 
> > > Any suggestion appreciated. 
> > > 
> > > Current multipath.conf (where I also commented out the getuid_callout 
> > > that 
> > > is not used anymore): 
> > > 
> > > [root at tekkaman setup]# cat /etc/multipath.conf 
> > > # RHEV REVISION 1.1 
> > > 
> > > blacklist { 
> > > devnode "^(sda|sdb)[0-9]*" 
> > > } 
> 
> 
> I think what happened is: 
> 
> 1. 3.5.1 had new multipath version 
> 2. So vdsm upgraded the local file 
> 3. blacklist above was removed 
> (it should exists in /etc/multipath.bak) 
> 
> To prevent local changes, you have to mark the file as private 
> as Dan suggests. 
> 
> Seems to be related to the find_multiapth = "yes" bug: 
> https://bugzilla.redhat.com/show_bug.cgi?id=1173290 

The symptoms above sound exactly like this issue. 
When find_multipaths is no (the default when the directive is not present), 
all non-blacklisted devices get claimed by multipath, and that is what happened above. 

Blacklisting the devices works, and adding "find_multipaths yes" should also work, because 
in that case only devices which have more than one path (or are explicitly named) will be 
claimed by multipath. 
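
As a sketch on top of the defaults section Gianluca already posted (keep the "# RHEV PRIVATE" marker on the second line if you maintain the file by hand), that would mean something like: 

    defaults {
        find_multipaths yes
        polling_interval 5
        no_path_retry fail
        user_friendly_names no
        flush_on_last_del yes
        fast_io_fail_tmo 5
        dev_loss_tmo 30
        max_fds 4096
    }

    # then let multipathd pick up the change
    systemctl restart multipathd.service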

My 2ct. 

- fabian 

> Ben, can you confirm that this is the same issue? 
> 
> > > 
> > > defaults { 
> > > polling_interval 5 
> > > #getuid_callout "/usr/lib/udev/scsi_id --whitelisted 
> > > --replace-whitespace --device=/dev/%n" 
> > > no_path_retry fail 
> > > user_friendly_names no 
> > > flush_on_last_del yes 
> > > fast_io_fail_tmo 5 
> > > dev_loss_tmo 30 
> > > max_fds 4096 
> > > } 
> > 
> 
> Regards, 
> Nir 
> _______________________________________________ 
> Users mailing list 
> Users at ovirt.org 
> http://lists.ovirt.org/mailman/listinfo/users 
> 


------------------------------ 

_______________________________________________ 
Users mailing list 
Users at ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 


End of Users Digest, Vol 40, Issue 140 
************************************** 


