<div dir="ltr">zram does not support direct IO (tested, indeed fails).<div>What I do is host the VMs there, though - this is working - but I'm using Lago (and not oVirt). does oVirt need direct IO for the temp disks? I thought we are doing them on the libvirt level?</div><div><br></div><div>This is the command I use:</div><div>sudo modprobe zram num_devices=1 && sudo zramctl --find --size 12G && sudo mkfs.xfs -K /dev/zram0 && sudo mount -o nobarrier /dev/zram0 /home/zram && sudo chmod 777 /home/zram<br></div><div><br></div><div>And then I run lago with: ./run_suite.sh -o /home/zram basic_suite_master</div><div><br></div><div>Y.</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Sep 29, 2016 at 10:47 AM, Evgheni Dereveanchin <span dir="ltr"><<a href="mailto:ederevea@redhat.com" target="_blank">ederevea@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Yaniv,<br>
<br>
this is a physical server with work directories<br>
created on a zRAM device, here's the patch:<br>
<a href="https://gerrit.ovirt.org/#/c/62249/2/site/ovirt_jenkins_slave/templates/prepare-ram-disk.service.erb" rel="noreferrer" target="_blank">https://gerrit.ovirt.org/#/c/<wbr>62249/2/site/ovirt_jenkins_<wbr>slave/templates/prepare-ram-<wbr>disk.service.erb</a><br>
<br>
I'll still need to read up on this but the only<br>
slave having this class (ovirt-srv08) is now offline<br>
and should not cause issues. I tested on VM slaves<br>
and did not see errors from the dd test command you provided.<br>
<br>
Please tell me if you see errors on other nodes and I'll<br>
check what's going on, but it must be something other than RAM disks.<br>
<span class="im HOEnZb"><br>
Regards,<br>
Evgheni Dereveanchin<br>
<br>
----- Original Message -----<br>
</span><div class="HOEnZb"><div class="h5">From: "Yaniv Kaul" <<a href="mailto:ykaul@redhat.com">ykaul@redhat.com</a>><br>
To: "Evgheni Dereveanchin" <<a href="mailto:ederevea@redhat.com">ederevea@redhat.com</a>><br>
Cc: "infra" <<a href="mailto:infra@ovirt.org">infra@ovirt.org</a>>, "devel" <<a href="mailto:devel@ovirt.org">devel@ovirt.org</a>>, "Eyal Edri" <<a href="mailto:eedri@redhat.com">eedri@redhat.com</a>>, "Nir Soffer" <<a href="mailto:nsoffer@redhat.com">nsoffer@redhat.com</a>><br>
Sent: Thursday, 29 September, 2016 9:32:45 AM<br>
Subject: Re: [ovirt-devel] [VDSM] All tests using directio fail on CI<br>
<br>
On Sep 29, 2016 10:28 AM, "Evgheni Dereveanchin" <<a href="mailto:ederevea@redhat.com">ederevea@redhat.com</a>><br>
wrote:<br>
><br>
> Hi,<br>
><br>
> Indeed the proposed dd test does not work on zRAM slaves.<br>
> Can we modify the job not to run on nodes with ram_disk label?<br>
<br>
Are those zram-based or RAM-based *virtio-blk* disks, or zram/RAM disks<br>
within the VMs?<br>
The former should work. The latter - no idea.<br>
<br>
><br>
> The node will be offline for now until we agree on what to do.<br>
> > An option is to abandon RAM disks completely, as we haven't found<br>
> > any performance benefit from using them so far.<br>
<br>
That's very surprising. In my case it at least doubles the performance.<br>
But I assume my storage (single disk) is far slower than yours.<br>
Y.<br>
<br>
><br>
> Regards,<br>
> Evgheni Dereveanchin<br>
><br>
> ----- Original Message -----<br>
> From: "Eyal Edri" <<a href="mailto:eedri@redhat.com">eedri@redhat.com</a>><br>
> To: "Nir Soffer" <<a href="mailto:nsoffer@redhat.com">nsoffer@redhat.com</a>>, "Evgheni Dereveanchin" <<br>
<a href="mailto:ederevea@redhat.com">ederevea@redhat.com</a>><br>
> Cc: "Yaniv Kaul" <<a href="mailto:ykaul@redhat.com">ykaul@redhat.com</a>>, "devel" <<a href="mailto:devel@ovirt.org">devel@ovirt.org</a>>, "infra" <<br>
<a href="mailto:infra@ovirt.org">infra@ovirt.org</a>><br>
> Sent: Thursday, 29 September, 2016 8:08:45 AM<br>
> Subject: Re: [ovirt-devel] [VDSM] All tests using directio fail on CI<br>
><br>
> Evgheni,<br>
> Can you try replacing the current RAM drive with zram?<br>
><br>
> On Wed, Sep 28, 2016 at 11:43 PM, Nir Soffer <<a href="mailto:nsoffer@redhat.com">nsoffer@redhat.com</a>> wrote:<br>
><br>
> > On Wed, Sep 28, 2016 at 11:39 PM, Yaniv Kaul <<a href="mailto:ykaul@redhat.com">ykaul@redhat.com</a>> wrote:<br>
> > > On Sep 28, 2016 11:37 PM, "Nir Soffer" <<a href="mailto:nsoffer@redhat.com">nsoffer@redhat.com</a>> wrote:<br>
> > >><br>
> > >> On Wed, Sep 28, 2016 at 11:20 PM, Nir Soffer <<a href="mailto:nsoffer@redhat.com">nsoffer@redhat.com</a>><br>
> > wrote:<br>
> > >> > On Wed, Sep 28, 2016 at 10:31 PM, Barak Korren <<a href="mailto:bkorren@redhat.com">bkorren@redhat.com</a>><br>
> > >> > wrote:<br>
> > >> >> The CI setup did not change recently.<br>
> > >> ><br>
> > >> > Great<br>
> > >> ><br>
> > >> >> All standard-CI jobs run inside mock (chroot) which is stored on top<br>
> > >> >> of a regular FS, so they should not be affected by the slave OS at all<br>
> > >> >> as far as FS settings go.<br>
> > >> >><br>
> > >> >> But perhaps some slave-OS/mock-OS combination is acting strangely, so<br>
> > >> >> could you be more specific and point to particular job runs that fail?<br>
> > >> ><br>
> > >> > This job failed, but it was deleted (I get 404 now):<br>
> > >> > <a href="http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc24-" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/<wbr>vdsm_master_check-patch-fc24-</a><br>
> > x86_64/2530/<br>
> > >><br>
> > >> Oops, wrong build.<br>
> > >><br>
> > >> This is the failing build:<br>
> > >><br>
> > >><br>
> > >> <a href="http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-" rel="noreferrer" target="_blank">http://jenkins.ovirt.org/job/<wbr>vdsm_master_check-patch-el7-</a><br>
> > x86_64/1054/console<br>
> > >><br>
> > >> And this is probably the reason - using a ram disk:<br>
> > >><br>
> > >> 12:24:53 Building remotely on <a href="http://ovirt-srv08.phx.ovirt.org" rel="noreferrer" target="_blank">ovirt-srv08.phx.ovirt.org</a> (phx physical<br>
> > >> integ-tests ram_disk fc23) in workspace<br>
> > >> /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64<br>
> > >><br>
> > >> We cannot run the storage tests using a ramdisk. We are creating<br>
> > >> (tiny) volumes and storage domains and doing copies; this code cannot<br>
> > >> work with a ramdisk.<br>
> > ><br>
> > > Will it work on zram?<br>
> > > What if we configure ram based iSCSI targets?<br>
> ><br>
> > I don't know, but it is easy to test - if this works, the tests will work:<br>
> ><br>
> > dd if=/dev/zero of=file bs=512 count=1 oflag=direct<br>
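(Illustration only: one way a job could guard the directio-dependent tests with that probe - a sketch assuming the tests run from $WORKSPACE; the variable and the probe file name are not from this thread.)<br>
<br>
probe="$WORKSPACE/.directio-probe"   # $WORKSPACE and the file name are assumptions<br>
if dd if=/dev/zero of="$probe" bs=512 count=1 oflag=direct 2>/dev/null; then<br>
    rm -f "$probe"   # direct IO works; run the full test suite<br>
else<br>
    echo "direct IO unsupported here (RAM-disk workspace?); skipping storage tests"<br>
    exit 0<br>
fi<br>
<br>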
> > _______________________________________________<br>
> > Infra mailing list<br>
> > <a href="mailto:Infra@ovirt.org">Infra@ovirt.org</a><br>
> > <a href="http://lists.ovirt.org/mailman/listinfo/infra" rel="noreferrer" target="_blank">http://lists.ovirt.org/<wbr>mailman/listinfo/infra</a><br>
> ><br>
> ><br>
> ><br>
><br>
><br>
> --<br>
> Eyal Edri<br>
> Associate Manager<br>
> RHV DevOps<br>
> EMEA ENG Virtualization R&D<br>
> Red Hat Israel<br>
><br>
> phone: <a href="tel:%2B972-9-7692018" value="+97297692018">+972-9-7692018</a><br>
> irc: eedri (on #tlv #rhev-dev #rhev-integ)<br>
</div></div></blockquote></div><br></div>