<div dir="ltr">OK,<div>I suggest to test using a VM with local disk (preferably on a host with SSD configured), if its working,</div><div>lets expedite moving all VMs or at least a large amount of VMs to it until we see network load reduced.</div><div><br></div><div>e.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, May 25, 2016 at 12:38 PM, Evgheni Dereveanchin <span dir="ltr"><<a href="mailto:ederevea@redhat.com" target="_blank">ederevea@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<br>

I checked SAR data on the storage servers and compared load
yesterday against three weeks ago (May 3). The values are in
pretty much the same range, but they now sit around the "high"
mark most of the time, so we may be nearing a bottleneck,
specifically on I/O: we mostly do writes to the NAS, not reads,
and there is quite a bit of overhead in the write path:

VM -> QCOW -> file -> network -> NFS -> DRBD -> disk
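
Something like this sketch can pull the average write rate out of
the sysstat daily files for comparison (the file names, the device
name and the wr_sec/s column are assumptions based on our sysstat
version, adjust as needed):

import subprocess

def avg_write_rate(sa_file, device):
    """Average wr_sec/s for one block device over a sar daily file."""
    out = subprocess.check_output(["sar", "-d", "-p", "-f", sa_file]).decode()
    col, rates = None, []
    for line in out.splitlines():
        fields = line.split()
        if not fields or fields[0] == "Average:":
            continue                      # skip the summary lines
        if "wr_sec/s" in fields:          # header line: learn the column index
            col = fields.index("wr_sec/s")
        elif col is not None and device in fields:
            try:
                rates.append(float(fields[col]))
            except (ValueError, IndexError):
                continue
    return sum(rates) / len(rates) if rates else 0.0

# compare May 3 against yesterday (sa03/sa24 and "sda" are examples)
for f in ("/var/log/sa/sa03", "/var/log/sa/sa24"):
    print(f, round(avg_write_rate(f, "sda"), 1), "sectors/s written")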

Using local scratch disks on SSDs should greatly improve
performance, as at least half of the above steps go away.
We don't really need to centrally store (NFS) or mirror (DRBD)
the data that slaves keep writing to their disks anyway.
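
To verify the gain, a trivial write test inside a test VM against a
local path and an NFS-backed path should already show the gap; a
rough sketch (both paths below are examples, adjust to the actual
mounts):

import os, time

def write_throughput(path, size_mb=512):
    """Write size_mb MiB of zeros and fsync; return MiB/s."""
    chunk = b"\0" * (1024 * 1024)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data really hit the backend
    elapsed = time.time() - start
    os.unlink(path)
    return size_mb / elapsed

# example mount points: local SSD scratch vs. NFS-backed disk
for p in ("/var/tmp/bench.bin", "/mnt/nfs-scratch/bench.bin"):
    print(p, "%.1f MiB/s" % write_throughput(p))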

For VMs where we do need redundancy, I'd suggest using
iSCSI storage domains in the long run.
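
For a quick manual test against such a target from a host, the
standard open-iscsi flow is enough (the portal address and IQN
below are made up):

import subprocess

PORTAL = "192.0.2.10:3260"                     # example portal
TARGET = "iqn.2016-05.org.example:ci-scratch"  # example IQN

# discover the targets the portal offers, then log in to ours
subprocess.check_call(["iscsiadm", "-m", "discovery",
                       "-t", "sendtargets", "-p", PORTAL])
subprocess.check_call(["iscsiadm", "-m", "node",
                       "-T", TARGET, "-p", PORTAL, "--login"])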

Regards,
Evgheni Dereveanchin

----- Original Message -----
From: "Eyal Edri" <eedri@redhat.com>
To: "Sandro Bonazzola" <sbonazzo@redhat.com>, "Evgheni Dereveanchin" <ederevea@redhat.com>, "Anton Marchukov" <amarchuk@redhat.com>
Cc: "Fabian Deutsch" <fdeutsch@redhat.com>, "infra" <infra@ovirt.org>
Sent: Wednesday, 25 May, 2016 9:31:43 AM
Subject: Re: ngn build jobs take more than twice (x) as long as in the last days

It might be extra load on the storage servers now that we're running
many more jobs.
Evgheni - can you check whether the load on the storage servers has
changed significantly enough to explain this degradation of service?

We need to expedite the enablement of SSDs in the hypervisors and move
to local hooks.
Anton - do we have a test VM that uses a local disk we can use to check
whether it improves the runtime?
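
In the meantime, running something like this inside a slave would tell
us whether its workspace is already on a local disk or still on NFS
(the workspace path is a guess):

import os

def fs_type(path):
    """Return the filesystem type of the mount backing `path`."""
    path = os.path.realpath(path)
    best, fstype = "", "unknown"
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _dev, mnt, typ = line.split()[:3]
            # the longest mount point that prefixes the path wins
            prefixed = path == mnt or path.startswith(mnt.rstrip("/") + "/")
            if prefixed and len(mnt) > len(best):
                best, fstype = mnt, typ
    return fstype

print(fs_type("/home/jenkins/workspace"))  # e.g. "nfs" vs. "ext4"/"xfs"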

On Tue, May 24, 2016 at 11:19 PM, Sandro Bonazzola <sbonazzo@redhat.com> wrote:

>
> On 24 May 2016 at 17:57, "Fabian Deutsch" <fdeutsch@redhat.com> wrote:
> >
> > Hey,
> >
> > $subj says it all.
> >
> > Affected jobs are:
> > http://jenkins.ovirt.org/user/fabiand/my-views/view/ovirt-node-ng/
> >
> > I.e. 3.6 - before: ~46 min, now 1:23 hrs
> >
> > In master it's even worse: >1:30 hrs
> >
> > Can someone help to identify the reason?
>
> I have no numbers, but I have the feeling that all jobs have been
> getting slower for a couple of weeks now. The yum install phase takes
> ages. I thought it was some temporary storage I/O peak, but it looks
> like it's not temporary.
>
> >
> > - fabian
> >
> > --
> > Fabian Deutsch <fdeutsch@redhat.com>
> > RHEV Hypervisor
> > Red Hat

--
Eyal Edri
Associate Manager
RHEV DevOps
EMEA ENG Virtualization R&D
Red Hat Israel

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)