zram does not support direct IO (tested, indeed fails).
What I do is host the VMs there, though - that is working - but I'm using
Lago (and not oVirt). Does oVirt need direct IO for the temp disks? I
thought we were handling them at the libvirt level?
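For reference, the check that fails is essentially the dd direct IO probe Nir suggests at the bottom of this thread, pointed at the zram-backed mount described below (the test file name and exact error are my assumption):

dd if=/dev/zero of=/home/zram/testfile bs=512 count=1 oflag=direct
# fails here, typically with EINVAL / "Invalid argument" - that is what "does not support direct IO" refers to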
This is the command I use:
sudo modprobe zram num_devices=1 && sudo zramctl --find --size 12G && \
sudo mkfs.xfs -K /dev/zram0 && sudo mount -o nobarrier /dev/zram0 /home/zram && \
sudo chmod 777 /home/zram
And then I run lago with: ./run_suite.sh -o /home/zram basic_suite_master
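If it helps, this is roughly how the device can be checked and torn down afterwards - a sketch, reusing the mount point from the command above:

zramctl                      # confirm /dev/zram0 exists with the 12G size
df -h /home/zram             # confirm the xfs mount is in place
sudo umount /home/zram && sudo zramctl --reset /dev/zram0 && sudo modprobe -r zram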
Y.
On Thu, Sep 29, 2016 at 10:47 AM, Evgheni Dereveanchin <ederevea(a)redhat.com>
wrote:
Hi Yaniv,
this is a physical server with work directories
created on a zRAM device; here's the patch:
https://gerrit.ovirt.org/#/c/62249/2/site/ovirt_jenkins_slave/templates/prepare-ram-disk.service.erb
I'll still need to read up on this, but the only
slave with this class (ovirt-srv08) is now offline
and should not cause issues. I tested on VM slaves
and did not see errors from the dd test command you provided.
Please tell me if you see errors on other nodes and I'll
check what's going on, but it must be something other than RAM disks.
Regards,
Evgheni Dereveanchin
----- Original Message -----
From: "Yaniv Kaul" <ykaul(a)redhat.com>
To: "Evgheni Dereveanchin" <ederevea(a)redhat.com>
Cc: "infra" <infra(a)ovirt.org>, "devel" <devel(a)ovirt.org>,
"Eyal Edri" <
eedri(a)redhat.com>, "Nir Soffer" <nsoffer(a)redhat.com>
Sent: Thursday, 29 September, 2016 9:32:45 AM
Subject: Re: [ovirt-devel] [VDSM] All tests using directio fail on CI
On Sep 29, 2016 10:28 AM, "Evgheni Dereveanchin" <ederevea(a)redhat.com>
wrote:
>
> Hi,
>
> Indeed the proposed dd test does not work on zRAM slaves.
> Can we modify the job so it does not run on nodes with the ram_disk label?
Are those zram-based or RAM-based *virtio-blk* disks, or zram/RAM disks
within the VMs?
The former should work. The latter - no idea.
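For the former case, a minimal sketch of handing a host zram device to a guest as a virtio-blk disk with virsh (the guest name and target device are made up, and whether cache=none works on top of zram is exactly the open question):

sudo zramctl --find --size 4G
virsh attach-disk GUEST_NAME /dev/zram0 vdb --driver qemu --subdriver raw \
    --targetbus virtio --cache none --live
# cache=none makes QEMU open the backing device with O_DIRECT on the host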
>
> The node will be offline for now until we agree on what to do.
> An option is to abandon RAM disks completely as we didn't find
> any performance benefits from using them so far.
That's very surprising. In my case it at least doubles the performance.
But I assume my storage (a single disk) is far slower than yours.
Y.
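As a crude way to compare, a sequential-write check against the regular workspace and the zram mount mentioned in this thread - only a sketch, and obviously not representative of the suite's IO pattern:

dd if=/dev/zero of=/home/jenkins/ddtest bs=1M count=2048 conv=fdatasync
dd if=/dev/zero of=/home/zram/ddtest bs=1M count=2048 conv=fdatasync
# compare the reported throughput, then remove both test files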
>
> Regards,
> Evgheni Dereveanchin
>
> ----- Original Message -----
> From: "Eyal Edri" <eedri(a)redhat.com>
> To: "Nir Soffer" <nsoffer(a)redhat.com>, "Evgheni
Dereveanchin" <
ederevea(a)redhat.com>
> Cc: "Yaniv Kaul" <ykaul(a)redhat.com>, "devel"
<devel(a)ovirt.org>, "infra"
<
infra(a)ovirt.org>
> Sent: Thursday, 29 September, 2016 8:08:45 AM
> Subject: Re: [ovirt-devel] [VDSM] All tests using directio fail on CI
>
> Evgheni,
> Can you try switching the current RAM drive with zram?
>
> On Wed, Sep 28, 2016 at 11:43 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
>
> > On Wed, Sep 28, 2016 at 11:39 PM, Yaniv Kaul <ykaul(a)redhat.com> wrote:
> > > On Sep 28, 2016 11:37 PM, "Nir Soffer" <nsoffer(a)redhat.com> wrote:
> > >>
> > >> On Wed, Sep 28, 2016 at 11:20 PM, Nir Soffer <nsoffer(a)redhat.com> wrote:
> > >> > On Wed, Sep 28, 2016 at 10:31 PM, Barak Korren <bkorren(a)redhat.com> wrote:
> > >> >> The CI setup did not change recently.
> > >> >
> > >> > Great
> > >> >
> > >> >> All standard-CI jobs run inside mock (chroot), which is stored on top
> > >> >> of a regular FS, so they should not be affected by the slave OS at all
> > >> >> as far as FS settings go.
> > >> >>
> > >> >> But perhaps some slave-OS/mock-OS combination is acting strangely, so
> > >> >> could you be more specific and point to particular job runs that fail?
> > >> >
> > >> > This job failed, but it was deleted (I get a 404 now):
> > >> > http://jenkins.ovirt.org/job/vdsm_master_check-patch-fc24-x86_64/2530/
> > >>
> > >> Oops, wrong build.
> > >>
> > >> This is the failing build:
> > >>
> > >> http://jenkins.ovirt.org/job/vdsm_master_check-patch-el7-x86_64/1054/console
> > >>
> > >> And this is probably the reason - using a ram disk:
> > >>
> > >> 12:24:53 Building remotely on ovirt-srv08.phx.ovirt.org (phx physical
> > >> integ-tests ram_disk fc23) in workspace
> > >> /home/jenkins/workspace/vdsm_master_check-patch-el7-x86_64
> > >>
> > >> We cannot run the storage tests using a ramdisk. We are creating
> > >> (tiny) volumes and storage domains and doing copies; this code cannot
> > >> work with a ramdisk.
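For illustration, this is the kind of failure a plain RAM-backed (tmpfs) work directory produces for direct IO - a sketch with a scratch mount point of my own choosing:

sudo mkdir -p /mnt/ramtest && sudo mount -t tmpfs tmpfs /mnt/ramtest
dd if=/dev/zero of=/mnt/ramtest/file bs=512 count=1 oflag=direct
# fails with "Invalid argument" because tmpfs does not support O_DIRECT
sudo umount /mnt/ramtest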
> > >
> > > Will it work on zram?
> > > What if we configure ram based iSCSI targets?
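For the iSCSI idea, a rough sketch with targetcli using the LIO ramdisk backstore (the IQN, size, and demo-mode settings are made up for illustration):

sudo targetcli /backstores/ramdisk create name=rd0 size=4G
sudo targetcli /iscsi create iqn.2016-09.org.ovirt.example:ramtarget
sudo targetcli /iscsi/iqn.2016-09.org.ovirt.example:ramtarget/tpg1/luns create /backstores/ramdisk/rd0
sudo targetcli /iscsi/iqn.2016-09.org.ovirt.example:ramtarget/tpg1 set attribute generate_node_acls=1 demo_mode_write_protect=0
# the initiator side (iscsiadm discovery/login) would then see a RAM-backed LUN to test against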
> >
> > I don't know, but it is easy to test - if this works, the tests will work:
> >
> > dd if=/dev/zero of=file bs=512 count=1 oflag=direct
> > _______________________________________________
> > Infra mailing list
> > Infra(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/infra
> >
> >
> >
>
>
> --
> Eyal Edri
> Associate Manager
> RHV DevOps
> EMEA ENG Virtualization R&D
> Red Hat Israel
>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)