> On 07/11/2013 02:45 PM, Piotr Szubiakowski wrote:
>>
>>
>>> On 07/11/2013 12:52 PM, Piotr Szubiakowski wrote:
>>>>
>>>>>> The way that oVirt manages storage domains accessed via FC is
>>>>>> very smart. There is a separate logical volume for each virtual
>>>>>> disk. But I think a logical volume can be "touched" by only one
>>>>>> host at a time. Is it possible for two hosts to read/write the
>>>>>> same logical volume without data corruption?
>>>>>
>>>>> Hence a shared disk over block storage using LVM must be
>>>>> pre-allocated, so no LV changes (lvextend) would be needed.
>>>>> (Also, it cannot have snapshots, since that would turn it into
>>>>> qcow.)
>>>>
>>>> OK, but this is the hypervisor's view. To a guest OS this LV is a
>>>> normal raw block device. I wonder whether anyone has tested this
>>>> feature and accessed a shared disk from many VMs at the same time?
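
For what it's worth, a quick way to confirm this from the hypervisor
side is to look at the storage domain's volume group and the LV backing
the disk (the VG and LV names below are only placeholders):

    lvs -o lv_name,lv_size,lv_attr <storage_domain_vg>
    # inspect the LV backing the shared disk; a pre-allocated shared
    # disk should report "file format: raw"
    qemu-img info /dev/<storage_domain_vg>/<disk_image_lv>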
>>> I did a brief test - 3 VMs, a shared disk, cman/pacemaker + GFS2 -
>>> and had no problems using it, no data corruption. These were only
>>> basic tests, though (creating/moving/deleting files), with no
>>> extensive usage or stress testing.
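
For reference, a rough sketch of the guest-side steps for such a test
(the device name, cluster name, and mount point are only examples, and
this assumes the cman/DLM cluster is already running on all three VMs):

    # on one VM: create the filesystem with one journal per node
    mkfs.gfs2 -p lock_dlm -t <clustername>:shared_gfs2 -j 3 /dev/vdb
    # on every VM: mount it (the DLM must be up for the mount to succeed)
    mount -t gfs2 /dev/vdb /mnt/shared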
>>
>> Thanks Yuri,
>> Is fencing handled by cman/pacemaker? I read about the fenced daemon
>> and it seems difficult to use together with oVirt.
>>
>> Piotr
> Yeah, fencing is handled by pacemaker: a stonith resource using
> fence_rhev, with parameters set according to the man page, with one
> exception - I had to add the additional parameter
> pcmk_host_list="vmname" so that pacemaker knows this stonith device
> belongs to that VM.
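
For reference, such a stonith resource might look roughly like this
with pcs (the agent ships as fence_rhevm in the fence-agents package;
the resource name, engine address, credentials and VM/node name below
are placeholders):

    pcs stonith create fence_vm1 fence_rhevm \
        ipaddr=<engine-address> login=admin@internal passwd=<password> \
        ssl=1 port=vm1 pcmk_host_list=vm1
    # port= is the VM's name in the engine; pcmk_host_list= tells
    # pacemaker which cluster node this device is allowed to fence

A manual "pcs stonith fence <node>" against one node is a simple way to
verify the agent actually works before relying on it.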
It's great news for me!
Many thanks,
Piotr