[ovirt-users] oVirt and Ceph

Yaniv Dary ydary at redhat.com
Sun Jun 26 12:47:06 UTC 2016


Yaniv Dary
Technical Product Manager
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306
        8272306
Email: ydary at redhat.com
IRC : ydary


On Sun, Jun 26, 2016 at 11:49 AM, Nicolás <nicolas at devels.es> wrote:

> Hi Nir,
>
> El 25/06/16 a las 22:57, Nir Soffer escribió:
>
> On Sat, Jun 25, 2016 at 11:47 PM, Nicolás <nicolas at devels.es> wrote:
>
> Hi,
>
> We're using Ceph along with an iSCSI gateway, so our storage domain is
> actually an iSCSI backend. So far we have had zero issues with roughly 50
> high-I/O VMs. Perhaps [1] might shed some light on how to set it up.
>
> Can you share more details on this setup and how you integrate with ovirt?
>
> For example, are you using ceph luns in regular iscsi storage domain, or
> attaching luns directly to vms?
>
>
> Fernando Frediani (responding to this thread) hit the nail on the head.
> We have a 3-node Ceph infrastructure, so we created a few volumes on the
> Ceph side (RBD images) and exported them over iSCSI; it is oVirt that
> creates the LVs on top, so we don't need to attach LUNs directly.
>
> Once the volumes are exported on the iSCSI side, adding an iSCSI domain on
> oVirt is enough to make the whole thing work.
>
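> For anyone wanting to reproduce this, the gateway side can be scripted
> with the stock rbd and targetcli (LIO) tools. The sketch below is only an
> illustration of the idea; the pool, image and IQN names are made up, not
> our real ones:
>
>   #!/usr/bin/env python
>   # Rough sketch: create an RBD image, map it on the gateway host and
>   # export it as an iSCSI LUN via LIO/targetcli. Run as root on the
>   # gateway; all names below are illustrative only.
>   import subprocess
>
>   POOL, IMAGE, SIZE_MB = "ovirt", "vol01", 102400
>   TARGET_IQN = "iqn.2016-06.example:ceph-gw"
>
>   def run(*cmd):
>       print("+ " + " ".join(cmd))
>       subprocess.check_call(cmd)
>
>   # 1. Create the RBD image and map it as a local block device
>   #    (the ceph udev rules typically add a /dev/rbd/<pool>/<image> link).
>   run("rbd", "create", "--size", str(SIZE_MB), "{0}/{1}".format(POOL, IMAGE))
>   run("rbd", "map", "{0}/{1}".format(POOL, IMAGE))
>   dev = "/dev/rbd/{0}/{1}".format(POOL, IMAGE)
>
>   # 2. Expose the mapped device through LIO: block backstore, target, LUN.
>   run("targetcli", "/backstores/block", "create",
>       "name={0}".format(IMAGE), "dev={0}".format(dev))
>   run("targetcli", "/iscsi", "create", TARGET_IQN)
>   run("targetcli", "/iscsi/{0}/tpg1/luns".format(TARGET_IQN), "create",
>       "/backstores/block/{0}".format(IMAGE))
>
> On top of that you would still add initiator ACLs / CHAP as usual; once
> the LUN is visible, a plain iSCSI storage domain in oVirt picks it up and
> builds its VG/LVs on it, exactly as with any other iSCSI array.
>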
> As for experience, we have done a few tests and so far we've had zero
> issues:
>
>    - The main bottleneck is the iSCSI gateway's network bandwidth. In our
>    case we have a balance-alb bond over two 1G network interfaces. We later
>    realized this kind of bonding doesn't help here, because the MAC
>    addresses won't change, so in practice at most 1G will be used. In some
>    heavy tests (e.g., powering on 50 VMs at a time) we hit that ceiling at
>    specific points, but it didn't affect performance significantly.
>
>
Did you try using iSCSI bonding to allow the use of more than one path?


>
>    - In some additional heavy tests (powering all VMs on and off at
>    once), we peaked at approximately 1200 IOPS. Under normal conditions
>    we don't surpass 200 IOPS, even when these 50 VMs do lots of disk
>    operations.
>    - We've also done some fault-tolerance tests, like removing one or
>    more disks from a Ceph node and reinserting them, or suddenly shutting
>    down one node and restoring it. The only problem we've experienced is
>    slower access to the iSCSI backend, which results in a warning in the
>    oVirt manager, something like "Storage is taking too long to
>    respond...", for maybe 15-20 seconds. We got no VM pauses at any
>    point, though, nor any other significant issue.
>
> Did you try our dedicated Cinder/Ceph support and compare it with the
> Ceph iSCSI gateway?
>
>
> No, we haven't. To avoid deploying Cinder we implemented the iSCSI
> gateway directly, as it looked easier to us.
>
> Nir
>
>
> Hope this helps.
>
> Regards.
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>

