On Wed, 2014-02-05 at 08:49 -0500, Yedidyah Bar David wrote:
From: "ml ml" <mliebherr99@googlemail.com>
To: Users@ovirt.org
Sent: Wednesday, February 5, 2014 12:45:55 PM
Subject: [Users] Will this two node concept scale and work?
Hello List,
my aim is to host multiple VMs which are redundant and highly available. It should also scale well.
I think usually people just buy a fat iSCSI storage array and attach it. In my case it should scale well from very small nodes to big ones.
Therefore an iSCSI target would bring a lot of overhead (10 GBit links and two paths, and really I should have a second hot-standby SAN, too). This makes scaling very hard.
This post is also not meant to be an iSCSI discussion.
Since oVirt does not support DRBD out of the box, I came up with my own concept:
http://oi62.tinypic.com/2550xg5.jpg
As far as I can tell I have the following advantages:
--------------------------------------------------------------------
- I can start with two simple, cheap nodes
- I could add more disks to my nodes, maybe even an SSD as a dedicated DRBD resource.
- I can connect the two nodes directly to each other with bonding or InfiniBand; I don't need a switch or anything in between.
Downside:
---------------
- I always need two nodes (as a pair)
Will this setup work for me? So far I think I will be quite happy with it.
Since the DRBD resources are shared in dual-primary mode, I am not sure whether oVirt can handle it. It is not allowed to write to a VM disk from both nodes at the same time.
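To be concrete, I mean a resource configuration roughly like this (just a sketch; resource name, hosts, devices and addresses are made up):

    resource r0 {
        net {
            protocol C;                 # synchronous replication
            allow-two-primaries yes;    # required for dual-primary operation
        }
        startup {
            become-primary-on both;
        }
        on node-a {
            device    /dev/drbd0;
            disk      /dev/vg0/lv_r0;
            address   10.0.0.1:7789;
            meta-disk internal;
        }
        on node-b {
            device    /dev/drbd0;
            disk      /dev/vg0/lv_r0;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }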
I don't know oVirt well enough to comment on that.
I did play in the past with DRBD and libvirt (virsh).
Note that having both nodes primary all the time for all resources is
asking for disaster. In any case of split brain, for whatever reason,
DRBD will not know what to do.
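When it does happen, recovery is manual: you pick one node as the split-brain
victim and throw its changes away, roughly like this (a sketch; "r0" is a
placeholder resource name, and any VM still using the device on the victim
has to be stopped first):

    # on the node whose changes you are willing to discard
    drbdadm disconnect r0
    drbdadm secondary r0
    drbdadm connect --discard-my-data r0

    # on the surviving node (only needed if it is in StandAlone state)
    drbdadm connect r0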
I second that; I had many problems without proper fencing, and even with fencing.
What I did was to allow both to be primary, but keep only one primary
most of the time (per resource). I wrote a script to do migration, which
made both primary for the duration of the migration (required by qemu)
and then moved the source to secondary when the migration finished. This
way you still have a chance of a disaster if there is a problem (split
brain, node failure) during a migration. So if you decide to go this way,
carefully plan and test to see that it works well for you. One source of
split brain, for me, at the time, was buggy NIC drivers and bad bonding
configuration. So test that well too, if applicable.
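In outline the script did something like this (just a rough sketch, not the
actual script; "r0", "vm1" and "node2" are placeholder names):

    #!/bin/bash
    # promote the destination side so the resource is primary on both
    # nodes for the duration of the migration (qemu needs the disk
    # writable on both ends while it migrates)
    ssh node2 drbdadm primary r0

    # live-migrate the guest
    virsh migrate --live vm1 qemu+ssh://node2/system

    # migration finished, qemu on this node has released the device,
    # so demote the source back to secondary
    drbdadm secondary r0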
The approach I took seems similar to "DRBD on LV level" in [1], but
with custom scripts and without oVirt.
You might be able to make oVirt do this for you with hooks. I didn't try that.
You could use DRBD 9, but I haven't tested it extensively yet. DRBD 9 promotes on write, so both sides stay passive (secondary) until one of the nodes wants to write; that node then automatically becomes primary. LINBIT did this to reduce the chance of split brain and to allow expanding to more than two nodes.
http://www.drbd.org/users-guide-9.0/s-automatic-promotion.html
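For illustration, the relevant knob is the auto-promote resource option; something like this (untested on my side, and the rest of the resource definition stays as usual):

    resource r0 {
        options {
            auto-promote yes;   # node becomes primary automatically when the device is opened for writing
        }
        # device/disk/address sections per host as in any other resource
    }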
But I don't see why it shouldn't work; maybe not with the oVirt Node image, but you can make a node out of a normal RHEL/CentOS/Fedora install.
One problem I always have with DRBD and RHEL/CentOS is that when you don't pay for LINBIT support, you don't get access to their repo, and DRBD is an additional option on RHEL. On CentOS and Fedora the packaged version is always lagging behind, so I have to compile the kernel module every time there is a new version or a kernel update.
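For what it's worth, the rebuild itself is quick once the kernel headers are in place; roughly like this (the version number is just an example and the exact configure flags may differ):

    # toolchain and headers matching the running kernel
    yum install -y gcc make kernel-devel-$(uname -r)

    # inside the unpacked DRBD source tree
    cd drbd-8.4.4
    ./configure --with-km     # also build the kernel module
    make && make install

    modprobe drbd
    cat /proc/drbd            # verify the new module version is loaded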
An obvious downside to this approach is that if one node in a pair is
down, the other has no backup now. If you have multiple nodes and
external shared storage, multiple nodes can be down with no disruption
to service if the remaining nodes are capable enough.
[1] http://www.ovirt.org/Features/DRBD
Best regards,
--
Didi