[Users] which file system for shared disk?

Hi, we are developing an application where it would be great if multiple hosts could have access to the same disk. I think we can use features like shared disk or direct LUN to attach the same storage to multiple VMs. However, to provide concurrent access to the resource, a cluster file system has to be used. The most popular open-source cluster file systems are GFS2 and OCFS2. So my questions are: 1) Has anyone shared a disk between VMs in oVirt? Which file system did you use? 2) Is it possible to use GFS2 on VMs that are running on oVirt? Has anyone run a fencing mechanism with oVirt/libvirt? Many thanks, Piotr
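For reference, a rough sketch of the attach step through the REST API, assuming the disk has already been created as shareable (e.g. with <shareable>true</shareable> or the "Is shareable" checkbox). The engine URL, credentials and UUIDs below are placeholders, and the paths and elements are from the 3.x-era API as I remember them, so please double-check against your engine's API documentation:

    import requests

    ENGINE = "https://engine.example.com/api"      # placeholder engine URL
    AUTH = ("admin@internal", "password")          # placeholder credentials
    HEADERS = {"Content-Type": "application/xml"}

    DISK_ID = "DISK-UUID"                          # the shareable disk / direct LUN
    for vm_id in ("VM1-UUID", "VM2-UUID"):         # the two VMs that should share it
        # attach the same disk entity to each VM
        r = requests.post(ENGINE + "/vms/" + vm_id + "/disks",
                          data='<disk id="' + DISK_ID + '"/>',
                          headers=HEADERS, auth=AUTH, verify=False)
        r.raise_for_status()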

Why not use gluster with xfs on the storage bricks? http://www.gluster.org/ On Wed, Jul 10, 2013 at 7:15 AM, Piotr Szubiakowski <piotr.szubiakowski@nask.pl> wrote:
Hi, we are developing an application where it would be great if multiple hosts could have access to the same disk. I think we can use features like shared disk or direct LUN to attach the same storage to multiple VMs. However, to provide concurrent access to the resource, a cluster file system has to be used. The most popular open-source cluster file systems are GFS2 and OCFS2. So my questions are:
1) Has anyone shared a disk between VMs in oVirt? Which file system did you use? 2) Is it possible to use GFS2 on VMs that are running on oVirt? Has anyone run a fencing mechanism with oVirt/libvirt?
Many thanks, Piotr

Hi, Gluster is good in a scenario where we have many hosts with their own storage and we aggregate these pieces into one shared storage. In that situation data is transferred via Ethernet or InfiniBand. In our scenario we have centralized storage accessed via Fibre Channel, so it would be more efficient if the data were transferred over FC. Thanks, Piotr On 10.07.2013 13:23, Chris Smith wrote:
Why not use gluster with xfs on the storage bricks?
On Wed, Jul 10, 2013 at 7:15 AM, Piotr Szubiakowski <piotr.szubiakowski@nask.pl> wrote:
Hi, we are developing an application where it would be great if multiple hosts could have access to the same disk. I think we can use features like shared disk or direct LUN to attach the same storage to multiple VMs. However, to provide concurrent access to the resource, a cluster file system has to be used. The most popular open-source cluster file systems are GFS2 and OCFS2. So my questions are:
1) Has anyone shared a disk between VMs in oVirt? Which file system did you use? 2) Is it possible to use GFS2 on VMs that are running on oVirt? Has anyone run a fencing mechanism with oVirt/libvirt?
Many thanks, Piotr

On 07/10/2013 07:53 AM, Piotr Szubiakowski wrote:
Hi, Gluster is good in a scenario where we have many hosts with their own storage and we aggregate these pieces into one shared storage. In that situation data is transferred via Ethernet or InfiniBand. In our scenario we have centralized storage accessed via Fibre Channel, so it would be more efficient if the data were transferred over FC.
For an FC disk presented to all nodes in the cluster, why use a filesystem? Use LVM instead: http://www.ovirt.org/Vdsm_Block_Storage_Domains
Thanks, Piotr
On 10.07.2013 13:23, Chris Smith wrote:
Why not use gluster with xfs on the storage bricks?
On Wed, Jul 10, 2013 at 7:15 AM, Piotr Szubiakowski <piotr.szubiakowski@nask.pl> wrote:
Hi, we are developing an application where it would be great if multiple hosts could have access to the same disk. I think we can use features like shared disk or direct LUN to attach the same storage to multiple VMs. However, to provide concurrent access to the resource, a cluster file system has to be used. The most popular open-source cluster file systems are GFS2 and OCFS2. So my questions are:
1) Has anyone shared a disk between VMs in oVirt? Which file system did you use? 2) Is it possible to use GFS2 on VMs that are running on oVirt? Has anyone run a fencing mechanism with oVirt/libvirt?
Many thanks, Piotr

On 07/10/2013 07:53 AM, Piotr Szubiakowski wrote:
Hi, Gluster is good in a scenario where we have many hosts with their own storage and we aggregate these pieces into one shared storage. In that situation data is transferred via Ethernet or InfiniBand. In our scenario we have centralized storage accessed via Fibre Channel, so it would be more efficient if the data were transferred over FC. For an FC disk presented to all nodes in the cluster, why use a filesystem? Use LVM instead.
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption? Thanks, Piotr
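To make sure I understand it right, that is roughly all a block storage domain is: an LVM volume group on the shared LUN, with one logical volume per virtual disk. A hand-run illustration of the same idea (device path and names are placeholders; vdsm of course does this itself, with its own locking and metadata handling):

    import subprocess

    def run(*cmd):
        # plain LVM2 command-line tools, run as root
        subprocess.check_call(cmd)

    # multipath device of the FC LUN (placeholder path)
    run("pvcreate", "/dev/mapper/fc_lun")
    run("vgcreate", "vg_domain", "/dev/mapper/fc_lun")
    # one pre-allocated logical volume per virtual disk
    run("lvcreate", "--name", "disk_vm1", "--size", "50G", "vg_domain")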

On 07/10/2013 05:33 PM, Piotr Szubiakowski wrote:
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed. (Also, it cannot have snapshots, since it would then become qcow.)
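In other words, the shared LV has to stay a plain pre-allocated raw volume. A quick way to check from the hypervisor (the LV path is a placeholder):

    import subprocess

    # prints "file format: raw" for a pre-allocated shared disk; taking a snapshot
    # would turn the volume into qcow2, whose internal metadata cannot safely be
    # updated from more than one host at a time
    subprocess.check_call(["qemu-img", "info", "/dev/vg_domain/disk_shared"])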

On 07/10/2013 05:33 PM, Piotr Szubiakowski wrote:
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
2013-7-11 1:43, Itamar Heim:
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed.
By "pre-allocated" here, do you mean even for qcow on KVM? If it is pre-allocated, there is no benefit to using qcow.
(Also, it cannot have snapshots, since it would then become qcow.)
Why can it not have snapshots? I think qcow on a block device can also have snapshots.
-- 舒明 Shu Ming, Open Virtualization Engineering; CSTL, IBM Corp. Tel: 86-10-82451626 Tieline: 9051626 E-mail: shuming@cn.ibm.com or shuming@linux.vnet.ibm.com Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

On 07/11/2013 05:38 AM, Shu Ming wrote:
On 07/10/2013 05:33 PM, Piotr Szubiakowski wrote:
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
2013-7-11 1:43, Itamar Heim:
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed.
By "pre-allocated" here, do you mean even for qcow on KVM? If it is pre-allocated, there is no benefit to using qcow.
(Also, it cannot have snapshots, since it would then become qcow.)
Why can it not have snapshots? I think qcow on a block device can also have snapshots.
Of course, and we use it all the time, but not for a shared disk, since then you'd need to sync the qcow metadata changes between the two active VMs writing to it.

The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed. (Also, it cannot have snapshots, since it would then become qcow.)
OK, but this is the hypervisor's view. For a guest OS this LV is a normal raw block device. I wonder whether anyone has tested this feature and accessed a shared disk from many VMs at the same time? Thanks, Piotr

On 07/11/2013 11:52 AM, Piotr Szubiakowski wrote:
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed. (Also, it cannot have snapshots, since it would then become qcow.)
OK, but this is the hypervisor's view. For a guest OS this LV is a normal raw block device. I wonder whether anyone has tested this feature and accessed a shared disk from many VMs at the same time?
It was tested, but it has to be raw/pre-allocated; otherwise either LV metadata or qcow metadata would need to be changed from more than a single host.

On 11.07.2013 12:02, Itamar Heim wrote:
On 07/11/2013 11:52 AM, Piotr Szubiakowski wrote:
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed. (Also, it cannot have snapshots, since it would then become qcow.)
OK, but this is the hypervisor's view. For a guest OS this LV is a normal raw block device. I wonder whether anyone has tested this feature and accessed a shared disk from many VMs at the same time?
It was tested, but it has to be raw/pre-allocated; otherwise either LV metadata or qcow metadata would need to be changed from more than a single host.
During these tests did you use any cluster file system? Thanks, Piotr

On 07/11/2013 12:52 PM, Piotr Szubiakowski wrote:
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed. (Also, it cannot have snapshots, since it would then become qcow.)
OK, but this is the hypervisor's view. For a guest OS this LV is a normal raw block device. I wonder whether anyone has tested this feature and accessed a shared disk from many VMs at the same time?
I did brief tests - 3 VMs, a shared disk, cman/pacemaker + GFS2 - and had no problems using it, no data corruption. Although those were only basic tests, like creating/moving/deleting files, no extensive usage or stress testing or anything like that.
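For completeness, a minimal sketch of that kind of GFS2 setup inside the guests, assuming cman/pacemaker and the DLM are already running on all three VMs and the shared disk shows up as /dev/vdb in each of them (cluster name, filesystem name and paths are placeholders):

    import subprocess

    def run(*cmd):
        subprocess.check_call(cmd)

    # -t is cluster_name:fs_name and must match the cluster configuration;
    # -j 3 creates one journal per node
    run("mkfs.gfs2", "-p", "lock_dlm", "-t", "vmcluster:shared", "-j", "3", "/dev/vdb")
    run("mkdir", "-p", "/mnt/shared")
    # mount on every node; the DLM coordinates concurrent access
    run("mount", "-t", "gfs2", "/dev/vdb", "/mnt/shared")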

On 07/11/2013 12:52 PM, Piotr Szubiakowski wrote:
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed. (Also, it cannot have snapshots, since it would then become qcow.)
OK, but this is the hypervisor's view. For a guest OS this LV is a normal raw block device. I wonder whether anyone has tested this feature and accessed a shared disk from many VMs at the same time?
I did brief tests - 3 VMs, a shared disk, cman/pacemaker + GFS2 - and had no problems using it, no data corruption. Although those were only basic tests, like creating/moving/deleting files, no extensive usage or stress testing or anything like that.
Thanks Yuri, is fencing handled by cman/pacemaker? I read about the fenced daemon and it seems difficult to use together with oVirt. Piotr

On 07/11/2013 02:45 PM, Piotr Szubiakowski wrote:
On 07/11/2013 12:52 PM, Piotr Szubiakowski wrote:
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed. (Also, it cannot have snapshots, since it would then become qcow.)
OK, but this is the hypervisor's view. For a guest OS this LV is a normal raw block device. I wonder whether anyone has tested this feature and accessed a shared disk from many VMs at the same time?
I did brief tests - 3 VMs, a shared disk, cman/pacemaker + GFS2 - and had no problems using it, no data corruption. Although those were only basic tests, like creating/moving/deleting files, no extensive usage or stress testing or anything like that.
Thanks Yuri, is fencing handled by cman/pacemaker? I read about the fenced daemon and it seems difficult to use together with oVirt.
Piotr
Yes, fencing is handled by Pacemaker, via the stonith resource fence_rhev with parameters according to the man page, with one exception: I had to add the additional parameter pcmk_host_list="vmname" so Pacemaker would know that this stonith device belongs to that VM.
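A sketch of the equivalent pcs command, assuming the agent is installed as fence_rhevm (that is how it is packaged in fence-agents here; the name may differ on other builds). The option names are from the man page as I recall them, and the engine address, credentials and VM name are placeholders:

    import subprocess

    # "port" is the VM name as known to the engine; pcmk_host_list ties this
    # stonith device to the matching cluster node, as described above
    subprocess.check_call([
        "pcs", "stonith", "create", "fence-vm1", "fence_rhevm",
        "ipaddr=engine.example.com", "login=admin@internal", "passwd=secret",
        "ssl=1", "port=vm1", "pcmk_host_list=vm1",
    ])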

On 07/11/2013 02:45 PM, Piotr Szubiakowski wrote:
On 07/11/2013 12:52 PM, Piotr Szubiakowski wrote:
The way oVirt manages storage domains accessed via FC is very smart. There is a separate logical volume for each virtual disk. But I think a logical volume can be "touched" by only one host at a time. Is it possible for two hosts to read/write the same logical volume with no data corruption?
Hence a shared disk over block storage using LVM must be pre-allocated, so no LV changes (lvextend) would be needed. (Also, it cannot have snapshots, since it would then become qcow.)
OK, but this is the hypervisor's view. For a guest OS this LV is a normal raw block device. I wonder whether anyone has tested this feature and accessed a shared disk from many VMs at the same time?
I did brief tests - 3 VMs, a shared disk, cman/pacemaker + GFS2 - and had no problems using it, no data corruption. Although those were only basic tests, like creating/moving/deleting files, no extensive usage or stress testing or anything like that.
Thanks Yuri, is fencing handled by cman/pacemaker? I read about the fenced daemon and it seems difficult to use together with oVirt.
Piotr
Yes, fencing is handled by Pacemaker, via the stonith resource fence_rhev with parameters according to the man page, with one exception: I had to add the additional parameter pcmk_host_list="vmname" so Pacemaker would know that this stonith device belongs to that VM.
It's great news for me!
Many thanks, Piotr

Hi Piotr! I've used OCFS2 outside of oVirt, so I can't speak specifically about a VM environment, but I suggest you use OCFS2 in place of GFS2. It is simpler to implement, with fewer components to configure, and it takes care of fencing for you (a minimal setup sketch follows below the quoted message). On 07/10/2013 08:15 AM, Piotr Szubiakowski wrote:
Hi, we are developing an application where it would be great if multiple hosts could have access to the same disk. I think we can use features like shared disk or direct LUN to attach the same storage to multiple VMs. However, to provide concurrent access to the resource, a cluster file system has to be used. The most popular open-source cluster file systems are GFS2 and OCFS2. So my questions are:
1) Has anyone shared a disk between VMs in oVirt? Which file system did you use? 2) Is it possible to use GFS2 on VMs that are running on oVirt? Has anyone run a fencing mechanism with oVirt/libvirt?
Many thanks, Piotr
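As mentioned above, a minimal sketch of an OCFS2 setup inside the guests, assuming /etc/ocfs2/cluster.conf already lists all the nodes and the o2cb service is running on each of them (device, label, slot count and mount point are placeholders):

    import subprocess

    def run(*cmd):
        subprocess.check_call(cmd)

    # -N is the number of node slots (how many nodes may mount it concurrently)
    run("mkfs.ocfs2", "-L", "shared", "-N", "3", "/dev/vdb")
    run("mkdir", "-p", "/mnt/shared")
    # mount on every node; o2cb handles cluster membership and fencing
    run("mount", "-t", "ocfs2", "/dev/vdb", "/mnt/shared")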

Hi Eduardo, yes, the fencing method used in OCFS2 is probably better for virtualized environments. Thanks for the advice! Many thanks, Piotr On 10.07.2013 13:32, Eduardo Ramos wrote:
Hi Piotr!
I've used OCFS2 outside of oVirt, so I can't speak specifically about a VM environment, but I suggest you use OCFS2 in place of GFS2. It is simpler to implement, with fewer components to configure, and it takes care of fencing for you.
On 07/10/2013 08:15 AM, Piotr Szubiakowski wrote:
Hi, we are developing an application where it would be great if multiple hosts could have access to the same disk. I think we can use features like shared disk or direct LUN to attach the same storage to multiple VMs. However, to provide concurrent access to the resource, a cluster file system has to be used. The most popular open-source cluster file systems are GFS2 and OCFS2. So my questions are:
1) Has anyone shared a disk between VMs in oVirt? Which file system did you use? 2) Is it possible to use GFS2 on VMs that are running on oVirt? Has anyone run a fencing mechanism with oVirt/libvirt?
Many thanks, Piotr
participants (7)
- Chris Smith
- Eduardo Ramos
- Itamar Heim
- Piotr Szubiakowski
- Shu Ming
- Subhendu Ghosh
- Yuriy Demchenko