[Users] glusterfs and ovirt

Hi.

I am testing GlusterFS's ability to work as a storage server. Unfortunately, oVirt has no direct GlusterFS support. Will this feature be added in the future?

Attempted scheme: GlusterFS is mounted into a folder on a node, and that mount is then connected to oVirt via NFS. It works =)

Now, trying to mount NFS from 127.0.0.1, I ran into an error:
Command:
[root@noc-4-m77 ~]# /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgcOezk
Error:
mount.nfs: Unknown error 521

NFSv4 is disabled. This mount:
/bin/mount -t nfs 127.0.0.1:/share/tmp /tmpgtsOetsk
succeeds. I understand that this is not an oVirt problem, but could you suggest any ideas on how to fix it?

To use GlusterFS in oVirt I execute the command:
mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log noc-1:/mht /share

Can I configure vdsm so that this is executed instead of:
/bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgtsOetsk
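For what it's worth, "Unknown error 521" is EBADHANDLE ("Illegal NFS file handle") in the Linux kernel's NFS errno range. One common cause when re-exporting a FUSE-backed mount such as GlusterFS through the kernel NFS server is a missing fsid= option on the export: FUSE filesystems have no stable device UUID from which knfsd can build file handles. A sketch of an export entry worth trying (the path and client are taken from this thread; the fsid value is arbitrary):

```
# /etc/exports -- sketch only: re-exporting the gluster FUSE mount /share
# over kernel NFS. fsid= gives knfsd a stable filesystem id; without it,
# NFSv3 clients may fail with EBADHANDLE (errno 521).
/share  127.0.0.1(rw,no_root_squash,fsid=1)
```

After editing, `exportfs -ra` reloads the export table. This is a guess at the cause, not a confirmed fix for this setup.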

On 03/01/2012 01:48 PM, зоррыч wrote:
Hi. I am testing GlusterFS's ability to work as a storage server. Unfortunately, oVirt has no direct GlusterFS support. Will this feature be added in the future?
I'll let someone else reply to the NFS question, but as for oVirt-Gluster integration: yes, it is in the works. This page gives a general picture of the work being carried out: http://www.ovirt.org/wiki/AddingGlusterSupportToOvirt
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Good news. Does it already work in a test version, or has development not yet begun?

-----Original Message-----
From: Itamar Heim [mailto:iheim@redhat.com]
Sent: Thursday, March 01, 2012 7:44 PM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] glusterfs and ovirt

On 03/01/2012 08:36 PM, зоррыч wrote:
Good news. Does it already work in a test version, or has development not yet begun?
Early patches to build this support are being sent to engine/vdsm now. So work has begun, but it is not working yet.

Do you already know the date when GlusterFS support will be added? Can I work with it in a test version of GlusterFS?

-----Original Message-----
From: Itamar Heim [mailto:iheim@redhat.com]
Sent: Friday, March 02, 2012 4:10 AM
To: зоррыч
Cc: users@ovirt.org
Subject: Re: [Users] glusterfs and ovirt

On 04/05/2012 05:25 PM, зоррыч wrote:
Do you already know the date when GlusterFS support will be added? Can I work with it in a test version of GlusterFS?
There are several parts to the support:
1. Managing the gluster hosts/storage/volumes: patches for different parts of this are in gerrit, but this is not needed for the first phase of using it.
2. Using gluster as a POSIX FS: the vdsm side is ready, and IIRC the engine side was done by Laszlo, who can share when test patches can be used for testing this.
3. Integrating the two together: a bit later :)

On 04/10/2012 01:43 PM, Itamar Heim wrote:
2. Using gluster as a POSIX FS: the vdsm side is ready, and IIRC the engine side was done by Laszlo, who can share when test patches can be used for testing this.
Small correction: the patches are not in gerrit yet (referring to the POSIX FS work on the engine side).

Yair,

Thanks for the update. Can my KVM hypervisors also function as storage nodes for GlusterFS? And what is the release date for GlusterFS support? We're looking at a production deployment in June. Thanks,

Andrei

On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
Thanks for the update. Can my KVM hypervisors also function as storage nodes for GlusterFS? And what is the release date for GlusterFS support?
Andrei, I am not sure about the release date. I can update you that the POSIX-FS patches (i.e., support for storage domains on POSIX-compliant file systems) for the oVirt engine core and API components are in gerrit. Most of that work is even merged; I just introduced some changes and hope to get them reviewed and merged ASAP.

On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
Thanks for the update. Can my KVM hypervisors also function as storage nodes for GlusterFS? And what is the release date for GlusterFS support?
Current status:
1. Patches for provisioning gluster clusters and volumes via oVirt are in review, aiming to cover this feature set [1]. I'm not sure all of them will make the oVirt 3.1 version, which is slated to branch for stabilization on June 1st, but I think "enough" is there. So I'd start trying the current upstream version to help find issues blocking you, and follow up on them during June as we stabilize oVirt 3.1 for release (planned for the end of June).
2. You should be able to use the same hosts for both gluster and virt, but there is no special logic/handling for this yet (i.e., trying it and providing feedback would help improve this mode). I would suggest starting with separate clusters first and only later trying the joint mode.
3. Creating a storage domain on top of gluster:
- expose NFS on top of it, and consume it as a normal NFS storage domain
- use a POSIX-FS storage domain with gluster mount semantics
- future: probably a native gluster storage domain, up to native integration with qemu
Can you please describe the use case you are trying to accommodate?
Thanks,
Itamar
[1] http://ovirt.org/wiki/Features/Gluster_Support
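Itamar's second option (a POSIX-FS storage domain with gluster mount semantics) would boil down to storage-domain parameters roughly like the following. The values are the ones quoted earlier in this thread; the exact field names are an assumption of this sketch, not confirmed oVirt UI labels:

```
# POSIX-FS storage domain parameters (sketch; field names assumed)
Path:          noc-1:/mht
VFS Type:      glusterfs
Mount Options: log-level=WARNING,log-file=/var/log/gluster.log
```

From these, vdsm would presumably issue essentially the mount command from the top of the thread: mount -t glusterfs -o <options> noc-1:/mht <mountpoint>.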

On Wed, May 16, 2012 at 3:29 PM, Itamar Heim <iheim@redhat.com> wrote:
3. Creating a storage domain on top of gluster: expose NFS on top of it and consume it as a normal NFS storage domain; use a POSIX-FS storage domain with gluster mount semantics; future: probably a native gluster storage domain, up to native integration with qemu.
I am looking at GlusterFS integration with QEMU, which involves adding GlusterFS as a block backend in QEMU. QEMU then talks to gluster directly via libglusterfs, bypassing FUSE. I can specify a volume file and the VM image directly on the QEMU command line to boot from a VM image that resides on a gluster volume. E.g.:

qemu -drive file=client.vol:/Fedora.img,format=gluster

In this example, Fedora.img is being served by gluster, and client.vol has the client-side translators specified.

I am not sure this use case would be served if GlusterFS is integrated as a POSIX-FS storage domain in VDSM. POSIX FS would involve a normal FUSE mount, and QEMU would be required to work with images from the FUSE mount path, right?

With QEMU supporting a GlusterFS backend natively, further optimizations are possible when the gluster volume is local to the host node. In this case, one could provide QEMU with a simple volume file that contains neither the client nor the server xlators, but just the posix xlator. This leads to the most optimal I/O path, one that bypasses RPC calls.

So do you think this use case (QEMU supporting a GlusterFS backend natively and using a volume file to specify the needed translators) warrants a specialized storage domain type for GlusterFS in VDSM?

Regards,
Bharata.
--
http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/
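To make the translator point concrete, a hypothetical minimal volume file for the two cases Bharata describes might look like this. The hostname and paths are illustrative (borrowed from earlier in the thread); this is a sketch of classic volfile syntax, not a tested configuration:

```
# client.vol (sketch) -- remote case: reach a gluster server over TCP/RPC.
volume remote-client
  type protocol/client
  option transport-type tcp
  option remote-host noc-1
  option remote-subvolume /mht
end-volume

# local case: the data lives on this host, so only the posix storage
# translator is kept and all client/server RPC layers drop away.
volume local-posix
  type storage/posix
  option directory /bricks/mht
end-volume
```

A real volume file would use only one of the two stacks at a time; the point is that the local stack contains no protocol translators at all.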

On 05/17/2012 06:55 PM, Bharata B Rao wrote:
I am looking at GlusterFS integration with QEMU, which involves adding GlusterFS as a block backend in QEMU. QEMU then talks to gluster directly via libglusterfs, bypassing FUSE. I can specify a volume file and the VM image directly on the QEMU command line, e.g. qemu -drive file=client.vol:/Fedora.img,format=gluster, where Fedora.img is served by gluster and client.vol has the client-side translators specified.
I am not sure this use case would be served if GlusterFS is integrated as a POSIX-FS storage domain in VDSM. POSIX FS would involve a normal FUSE mount, and QEMU would be required to work with images from the FUSE mount path, right?
With QEMU supporting a GlusterFS backend natively, further optimizations are possible when the gluster volume is local to the host node. In this case, one could provide QEMU with a simple volume file that contains neither the client nor the server xlators, but just the posix xlator. This leads to the most optimal I/O path, one that bypasses RPC calls.
So do you think, this use case (QEMU supporting GlusterFS backend natively and using volume file to specify the needed translators) warrants a specialized storage domain type for GlusterFS in VDSM ?
I'm not sure if a special storage domain, or a PosixFS based domain with enhanced capabilities. Ayal?
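For concreteness, the "simple volume file that would not contain client or server xlators, but instead just the posix xlator" mentioned above could look like this minimal sketch in GlusterFS volfile syntax (the volume name and brick directory are made-up examples, not from the thread):

```
volume local-posix
  type storage/posix
  option directory /export/brick1
end-volume
```

With such a volfile, QEMU could in principle be pointed straight at the local brick, skipping both FUSE and the client/server RPC translators.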

On 5/17/2012 1:35 PM, Itamar Heim wrote:
On 05/17/2012 06:55 PM, Bharata B Rao wrote:
On Wed, May 16, 2012 at 3:29 PM, Itamar Heim<iheim@redhat.com> wrote:
On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
Yair
Thanks for an update. Can I have KVM hypervisors also function as storage nodes for glusterfs? What is a release date for glusterfs support? We're looking for a production deployment in June. Thanks
current status is 1. patches for provisioning gluster clusters and volumes via ovirt are in review, trying to cover this feature set [1]. I'm not sure if all of them will make the ovirt 3.1 version which is slated to branch for stabilization June 1st, but i think "enough" is there. so i'd start trying current upstream version to help find issues blocking you, and following on them during june as we stabilize ovirt 3.1 for release (planned for end of june).
2. you should be able to use same hosts for both gluster and virt, but there is no special logic/handling for this yet (i.e., trying and providing feedback would help improve this mode). I would suggest start from separate clusters though first, and only later trying the joint mode.
3. creating a storage domain on top of gluster: - expose NFS on top of it, and consume as a normal nfs storage domain - use posixfs storage domain with gluster mount semantics - future: probably native gluster storage domain, up to native integration with qemu
I am looking at GlusterFS integration with QEMU which involves adding GlusterFS as block backend in QEMU. This will involve QEMU talking to gluster directly via libglusterfs bypassing FUSE. I could specify a volume file and the VM image directly on QEMU command line to boot from the VM image that resides on a gluster volume.
Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster
In this example, Fedora.img is being served by gluster and client.vol would have client-side translators specified.
I am not sure if this use case would be served if GlusterFS is integrated as posixfs storage domain in VDSM. Posixfs would involve normal FUSE mount and QEMU would be required to work with images from FUSE mount path ?
With QEMU supporting GlusterFS backend natively, further optimizations are possible in case of gluster volume being local to the host node. In this case, one could provide QEMU with a simple volume file that would not contain client or server xlators, but instead just the posix xlator. This would lead to most optimal IO path that bypasses RPC calls.
So do you think, this use case (QEMU supporting GlusterFS backend natively and using volume file to specify the needed translators) warrants a specialized storage domain type for GlusterFS in VDSM ?
I'm not sure if a special storage domain, or a PosixFS based domain with enhanced capabilities. Ayal?
Direct qemu support for gluster is similar to ceph rbd/rados object storage which is also supported in qemu. A domain type which can handle object based storage of this sort would be very nice.
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

On 05/17/2012 11:05 PM, Itamar Heim wrote:
On 05/17/2012 06:55 PM, Bharata B Rao wrote:
On Wed, May 16, 2012 at 3:29 PM, Itamar Heim<iheim@redhat.com> wrote:
On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
Yair
Thanks for an update. Can I have KVM hypervisors also function as storage nodes for glusterfs? What is a release date for glusterfs support? We're looking for a production deployment in June. Thanks
current status is 1. patches for provisioning gluster clusters and volumes via ovirt are in review, trying to cover this feature set [1]. I'm not sure if all of them will make the ovirt 3.1 version which is slated to branch for stabilization June 1st, but i think "enough" is there. so i'd start trying current upstream version to help find issues blocking you, and following on them during june as we stabilize ovirt 3.1 for release (planned for end of june).
2. you should be able to use same hosts for both gluster and virt, but there is no special logic/handling for this yet (i.e., trying and providing feedback would help improve this mode). I would suggest start from separate clusters though first, and only later trying the joint mode.
3. creating a storage domain on top of gluster: - expose NFS on top of it, and consume as a normal nfs storage domain - use posixfs storage domain with gluster mount semantics - future: probably native gluster storage domain, up to native integration with qemu
I am looking at GlusterFS integration with QEMU which involves adding GlusterFS as block backend in QEMU. This will involve QEMU talking to gluster directly via libglusterfs bypassing FUSE. I could specify a volume file and the VM image directly on QEMU command line to boot from the VM image that resides on a gluster volume.
Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster
In this example, Fedora.img is being served by gluster and client.vol would have client-side translators specified.
I am not sure if this use case would be served if GlusterFS is integrated as posixfs storage domain in VDSM. Posixfs would involve normal FUSE mount and QEMU would be required to work with images from FUSE mount path ?
With QEMU supporting GlusterFS backend natively, further optimizations are possible in case of gluster volume being local to the host node. In this case, one could provide QEMU with a simple volume file that would not contain client or server xlators, but instead just the posix xlator. This would lead to most optimal IO path that bypasses RPC calls.
So do you think, this use case (QEMU supporting GlusterFS backend natively and using volume file to specify the needed translators) warrants a specialized storage domain type for GlusterFS in VDSM ?
I'm not sure if a special storage domain, or a PosixFS based domain with enhanced capabilities. Ayal?
Related question: with QEMU using the GlusterFS backend natively (as described above), it also means that QEMU needs additional options/parameters as part of its command line (as given above).
How does VDSM today support generating a custom qemu cmdline? I know VDSM talks to libvirt, so is there a framework in VDSM to edit/modify the domxml based on some pre-conditions, and how/where should one hook in to do that modification? I know of the libvirt hooks framework in VDSM, but that was more for temporary/experimental needs, or am I completely wrong here?
Irrespective of whether GlusterFS integrates into VDSM as PosixFS or a special storage domain, that won't address the need to generate a custom qemu cmdline when a file/image is served by GlusterFS. What's the way to address this issue in VDSM?
I am assuming here that a special storage domain (aka repo engine) only manages the image repository and image-related operations, and won't help in modifying the qemu cmdline being generated.
[Ccing vdsm-devel also]
thanx, deepak

On 05/18/2012 04:28 PM, Deepak C Shetty wrote:
On 05/17/2012 11:05 PM, Itamar Heim wrote:
On 05/17/2012 06:55 PM, Bharata B Rao wrote:
On Wed, May 16, 2012 at 3:29 PM, Itamar Heim<iheim@redhat.com> wrote:
On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
Yair
Thanks for an update. Can I have KVM hypervisors also function as storage nodes for glusterfs? What is a release date for glusterfs support? We're looking for a production deployment in June. Thanks
current status is 1. patches for provisioning gluster clusters and volumes via ovirt are in review, trying to cover this feature set [1]. I'm not sure if all of them will make the ovirt 3.1 version which is slated to branch for stabilization June 1st, but i think "enough" is there. so i'd start trying current upstream version to help find issues blocking you, and following on them during june as we stabilize ovirt 3.1 for release (planned for end of june).
2. you should be able to use same hosts for both gluster and virt, but there is no special logic/handling for this yet (i.e., trying and providing feedback would help improve this mode). I would suggest start from separate clusters though first, and only later trying the joint mode.
3. creating a storage domain on top of gluster: - expose NFS on top of it, and consume as a normal nfs storage domain - use posixfs storage domain with gluster mount semantics - future: probably native gluster storage domain, up to native integration with qemu
I am looking at GlusterFS integration with QEMU which involves adding GlusterFS as block backend in QEMU. This will involve QEMU talking to gluster directly via libglusterfs bypassing FUSE. I could specify a volume file and the VM image directly on QEMU command line to boot from the VM image that resides on a gluster volume.
Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster
In this example, Fedora.img is being served by gluster and client.vol would have client-side translators specified.
I am not sure if this use case would be served if GlusterFS is integrated as posixfs storage domain in VDSM. Posixfs would involve normal FUSE mount and QEMU would be required to work with images from FUSE mount path ?
With QEMU supporting GlusterFS backend natively, further optimizations are possible in case of gluster volume being local to the host node. In this case, one could provide QEMU with a simple volume file that would not contain client or server xlators, but instead just the posix xlator. This would lead to most optimal IO path that bypasses RPC calls.
So do you think, this use case (QEMU supporting GlusterFS backend natively and using volume file to specify the needed translators) warrants a specialized storage domain type for GlusterFS in VDSM ?
I'm not sure if a special storage domain, or a PosixFS based domain with enhanced capabilities. Ayal?
Related Question: With QEMU using GlusterFS backend natively (as described above), it also means that it needs addnl options/parameters as part of qemu command line (as given above).
There is no support in qemu for gluster yet, but it is not far away
How does VDSM today support generating a custom qemu cmdline. I know VDSM talks to libvirt, so is there a framework in VDSM to edit/modify the domxml based on some pre-conditions, and how / where one should hook up to do that modification ? I know of libvirt hooks framework in VDSM, but that was more for temporary/experimental needs, or am i completely wrong here ?
Irrespective of whether GlusterFS integrates into VDSM as PosixFS or a special storage domain, that won't address the need to generate a custom qemu cmdline when a file/image is served by GlusterFS. What's the way to address this issue in VDSM?
I am assuming here that special storage domain (aka repo engine) is only to manage image repository, and image related operations, won't help in modifying qemu cmd line being generated.
[Ccing vdsm-devel also]
thanx, deepak
_______________________________________________ vdsm-devel mailing list vdsm-devel@lists.fedorahosted.org https://fedorahosted.org/mailman/listinfo/vdsm-devel

On Sun, May 20, 2012 at 4:57 PM, Dor Laor <dlaor@redhat.com> wrote:
On 05/18/2012 04:28 PM, Deepak C Shetty wrote:
On 05/17/2012 11:05 PM, Itamar Heim wrote:
On 05/17/2012 06:55 PM, Bharata B Rao wrote:
I am looking at GlusterFS integration with QEMU which involves adding GlusterFS as block backend in QEMU. This will involve QEMU talking to gluster directly via libglusterfs bypassing FUSE. I could specify a volume file and the VM image directly on QEMU command line to boot from the VM image that resides on a gluster volume.
Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster
In this example, Fedora.img is being served by gluster and client.vol would have client-side translators specified.
I am not sure if this use case would be served if GlusterFS is integrated as posixfs storage domain in VDSM. Posixfs would involve normal FUSE mount and QEMU would be required to work with images from FUSE mount path ?
With QEMU supporting GlusterFS backend natively, further optimizations are possible in case of gluster volume being local to the host node. In this case, one could provide QEMU with a simple volume file that would not contain client or server xlators, but instead just the posix xlator. This would lead to most optimal IO path that bypasses RPC calls.
So do you think, this use case (QEMU supporting GlusterFS backend natively and using volume file to specify the needed translators) warrants a specialized storage domain type for GlusterFS in VDSM ?
I'm not sure if a special storage domain, or a PosixFS based domain with enhanced capabilities. Ayal?
Related Question: With QEMU using GlusterFS backend natively (as described above), it also means that it needs addnl options/parameters as part of qemu command line (as given above).
There is no support in qemu for gluster yet, but it is not far away
As I said above, I am working on this. Will post the patches shortly. Regards, Bharata. -- http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/

On 05/21/2012 06:15 AM, Bharata B Rao wrote:
On Sun, May 20, 2012 at 4:57 PM, Dor Laor<dlaor@redhat.com> wrote:
On 05/18/2012 04:28 PM, Deepak C Shetty wrote:
On 05/17/2012 11:05 PM, Itamar Heim wrote:
On 05/17/2012 06:55 PM, Bharata B Rao wrote:
I am looking at GlusterFS integration with QEMU which involves adding GlusterFS as block backend in QEMU. This will involve QEMU talking to gluster directly via libglusterfs bypassing FUSE. I could specify a volume file and the VM image directly on QEMU command line to boot from the VM image that resides on a gluster volume.
Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster
In this example, Fedora.img is being served by gluster and client.vol would have client-side translators specified.
I am not sure if this use case would be served if GlusterFS is integrated as posixfs storage domain in VDSM. Posixfs would involve normal FUSE mount and QEMU would be required to work with images from FUSE mount path ?
With QEMU supporting GlusterFS backend natively, further optimizations are possible in case of gluster volume being local to the host node. In this case, one could provide QEMU with a simple volume file that would not contain client or server xlators, but instead just the posix xlator. This would lead to most optimal IO path that bypasses RPC calls.
So do you think, this use case (QEMU supporting GlusterFS backend natively and using volume file to specify the needed translators) warrants a specialized storage domain type for GlusterFS in VDSM ?
I'm not sure if a special storage domain, or a PosixFS based domain with enhanced capabilities. Ayal?
Related Question: With QEMU using GlusterFS backend natively (as described above), it also means that it needs addnl options/parameters as part of qemu command line (as given above).
There is no support in qemu for gluster yet, but it is not far away
As I said above, I am working on this. Will post the patches shortly.
/me apologizes for the useless noise; I'm using a new Thunderbird plugin that collapses quotes and it made me lose the context.
Regards, Bharata.

On 05/18/2012 04:28 PM, Deepak C Shetty wrote:
On 05/17/2012 11:05 PM, Itamar Heim wrote:
On 05/17/2012 06:55 PM, Bharata B Rao wrote:
On Wed, May 16, 2012 at 3:29 PM, Itamar Heim<iheim@redhat.com> wrote:
On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
Yair
Thanks for an update. Can I have KVM hypervisors also function as storage nodes for glusterfs? What is a release date for glusterfs support? We're looking for a production deployment in June. Thanks
current status is 1. patches for provisioning gluster clusters and volumes via ovirt are in review, trying to cover this feature set [1]. I'm not sure if all of them will make the ovirt 3.1 version which is slated to branch for stabilization June 1st, but i think "enough" is there. so i'd start trying current upstream version to help find issues blocking you, and following on them during june as we stabilize ovirt 3.1 for release (planned for end of june).
2. you should be able to use same hosts for both gluster and virt, but there is no special logic/handling for this yet (i.e., trying and providing feedback would help improve this mode). I would suggest start from separate clusters though first, and only later trying the joint mode.
3. creating a storage domain on top of gluster: - expose NFS on top of it, and consume as a normal nfs storage domain - use posixfs storage domain with gluster mount semantics - future: probably native gluster storage domain, up to native integration with qemu
I am looking at GlusterFS integration with QEMU which involves adding GlusterFS as block backend in QEMU. This will involve QEMU talking to gluster directly via libglusterfs bypassing FUSE. I could specify a volume file and the VM image directly on QEMU command line to boot from the VM image that resides on a gluster volume.
Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster
In this example, Fedora.img is being served by gluster and client.vol would have client-side translators specified.
I am not sure if this use case would be served if GlusterFS is integrated as posixfs storage domain in VDSM. Posixfs would involve normal FUSE mount and QEMU would be required to work with images from FUSE mount path ?
With QEMU supporting GlusterFS backend natively, further optimizations are possible in case of gluster volume being local to the host node. In this case, one could provide QEMU with a simple volume file that would not contain client or server xlators, but instead just the posix xlator. This would lead to most optimal IO path that bypasses RPC calls.
So do you think, this use case (QEMU supporting GlusterFS backend natively and using volume file to specify the needed translators) warrants a specialized storage domain type for GlusterFS in VDSM ?
I'm not sure if a special storage domain, or a PosixFS based domain with enhanced capabilities. Ayal?
Related Question: With QEMU using GlusterFS backend natively (as described above), it also means that it needs addnl options/parameters as part of qemu command line (as given above).
How does VDSM today support generating a custom qemu cmdline. I know VDSM talks to libvirt, so is there a framework in VDSM to edit/modify the domxml based on some pre-conditions, and how / where one should hook up to do that modification ? I know of libvirt hooks framework in VDSM, but that was more for temporary/experimental needs, or am i completely wrong here ?
for something vdsm is not aware of yet - you can use vdsm custom hooks to manipulate the libvirt xml.
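As a rough illustration of such a custom hook (not VDSM's actual gluster handling; the path prefix, the added attribute, and the sed-based rewrite are all assumptions made for the sketch), a before_vm_start hook script can edit the domain XML file whose path VDSM exports in the _hook_domxml environment variable:

```shell
#!/bin/bash
# Sketch of a VDSM custom hook, e.g. installed as
# /usr/libexec/vdsm/hooks/before_vm_start/50_gluster_tag.
# VDSM exports the path of the libvirt domain XML in _hook_domxml
# and re-reads the file after the hook exits.

# Tag file-backed disk sources under a (hypothetical) gluster mount
# point; a later step could translate such disks into native qemu
# gluster drives. sed is crude for XML but keeps the sketch short.
rewrite_domxml() {
    sed -i 's|<source file="/rhev/data-center/mnt/gluster|<source gluster="yes" file="/rhev/data-center/mnt/gluster|g' "$1"
}

# Only act when actually invoked by vdsm (i.e. the variable is set).
if [ -n "$_hook_domxml" ]; then
    rewrite_domxml "$_hook_domxml"
fi
```

In a real deployment one would manipulate the XML with a proper parser; VDSM's python hook examples use its `hooking` helper module for reading and writing the domxml.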
Irrespective of whether GlusterFS integrates into VDSM as PosixFS or special storage domain it won't address the need to generate a custom qemu cmdline if a file/image was served by GlusterFS. Whats the way to address this issue in VDSM ?
when vdsm supports this I expect it will know to pass these. it won't necessarily be a generic PosixFS at that time.
I am assuming here that special storage domain (aka repo engine) is only to manage image repository, and image related operations, won't help in modifying qemu cmd line being generated.
support by vdsm for specific qemu options (via libvirt) will be done by either having a special type of storage domain, or some capability exchange, etc.
[Ccing vdsm-devel also]
thanx, deepak

Hi Zorro, Can you tell me your Gluster version? Regards, Bala ----- Original Message -----
From: "зоррыч" <zorro@megatrone.ru> To: "Itamar Heim" <iheim@redhat.com> Cc: users@ovirt.org Sent: Friday, March 2, 2012 12:06:24 AM Subject: Re: [Users] glusterfs and ovirt
Good news. Does it already work in a test version, or has development not yet begun?
-----Original Message----- From: Itamar Heim [mailto:iheim@redhat.com] Sent: Thursday, March 01, 2012 7:44 PM To: зоррыч Cc: users@ovirt.org Subject: Re: [Users] glusterfs and ovirt
On 03/01/2012 01:48 PM, зоррыч wrote:
Hi.
I have been testing glusterfs's ability to work as a storage server. Unfortunately, oVirt has no direct glusterfs support.
Will this feature be added in the future?
I'll let someone else reply on the below, but as for ovirt-gluster integration - yes, it is in the works. this gives a general picture of the work being carried out: http://www.ovirt.org/wiki/AddingGlusterSupportToOvirt
I attempted the following scheme: glusterfs is mounted in a folder on a node, and that glusterfs mount is connected to oVirt via NFS.
It works =)
Now, trying to mount NFS to 127.0.0.1, I encountered an error:
Command:
[root@noc-4-m77 ~]# /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgcOezk
Error:
mount.nfs: Unknown error 521
NFS v4 is disabled.
This mount, in contrast, succeeds:
/bin/mount -t nfs 127.0.0.1:/share/tmp /tmpgtsoetsk
I understand that this is not an oVirt problem, but could you suggest any ideas on how to fix it?
To use glusterfs in oVirt, I execute the command:
mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log noc-1:/mht /share
Can I configure vdsm so that this command is carried out instead of /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgtsOetsk?

[root@noc-4-m77 ~]# gluster --version
glusterfs 3.2.5 built on Nov 15 2011 08:43:14
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY. You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@noc-4-m77 ~]#
-----Original Message----- From: Balamurugan Arumugam [mailto:barumuga@redhat.com] Sent: Monday, March 05, 2012 10:30 AM To: зоррыч Cc: users@ovirt.org; Itamar Heim Subject: Re: [Users] glusterfs and ovirt
Hi Zorro, Can you tell me your Gluster version? Regards, Bala
----- Original Message -----
From: "зоррыч" <zorro@megatrone.ru> To: "Itamar Heim" <iheim@redhat.com> Cc: users@ovirt.org Sent: Friday, March 2, 2012 12:06:24 AM Subject: Re: [Users] glusterfs and ovirt
Good news. Does it already work in a test version, or has development not yet begun?
-----Original Message----- From: Itamar Heim [mailto:iheim@redhat.com] Sent: Thursday, March 01, 2012 7:44 PM To: зоррыч Cc: users@ovirt.org Subject: Re: [Users] glusterfs and ovirt
On 03/01/2012 01:48 PM, зоррыч wrote:
Hi.
I have been testing glusterfs's ability to work as a storage server. Unfortunately, oVirt has no direct glusterfs support.
Will this feature be added in the future?
I'll let someone else reply on the below, but as for ovirt-gluster integration - yes, it is in the works. this gives a general picture of the work being carried out: http://www.ovirt.org/wiki/AddingGlusterSupportToOvirt
I attempted the following scheme: glusterfs is mounted in a folder on a node, and that glusterfs mount is connected to oVirt via NFS.
It works =)
Now, trying to mount NFS to 127.0.0.1, I encountered an error:
Command:
[root@noc-4-m77 ~]# /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgcOezk
Error:
mount.nfs: Unknown error 521
NFS v4 is disabled.
This mount, in contrast, succeeds:
/bin/mount -t nfs 127.0.0.1:/share/tmp /tmpgtsoetsk
I understand that this is not an oVirt problem, but could you suggest any ideas on how to fix it?
To use glusterfs in oVirt, I execute the command:
mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log noc-1:/mht /share
Can I configure vdsm so that this command is carried out instead of /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgtsOetsk?

I discussed this with Zorro off the list, and below is an update. Mounting Gluster NFS over loopback is not qualified in Gluster v3.2.5. The upcoming Gluster v3.3.0 will have this feature fully qualified. Regards, Bala ----- Original Message -----
From: "зоррыч" <zorro@megatrone.ru> To: "Balamurugan Arumugam" <barumuga@redhat.com> Cc: users@ovirt.org, "Itamar Heim" <iheim@redhat.com> Sent: Monday, March 5, 2012 6:42:29 PM Subject: RE: [Users] glusterfs and ovirt
[root@noc-4-m77 ~]# gluster --version
glusterfs 3.2.5 built on Nov 15 2011 08:43:14
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY. You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@noc-4-m77 ~]#
-----Original Message----- From: Balamurugan Arumugam [mailto:barumuga@redhat.com] Sent: Monday, March 05, 2012 10:30 AM To: зоррыч Cc: users@ovirt.org; Itamar Heim Subject: Re: [Users] glusterfs and ovirt
Hi Zorro,
Can you tell me your Gluster version?
Regards, Bala
----- Original Message -----
From: "зоррыч" <zorro@megatrone.ru> To: "Itamar Heim" <iheim@redhat.com> Cc: users@ovirt.org Sent: Friday, March 2, 2012 12:06:24 AM Subject: Re: [Users] glusterfs and ovirt
Good news. Does it already work in a test version, or has development not yet begun?
-----Original Message----- From: Itamar Heim [mailto:iheim@redhat.com] Sent: Thursday, March 01, 2012 7:44 PM To: зоррыч Cc: users@ovirt.org Subject: Re: [Users] glusterfs and ovirt
On 03/01/2012 01:48 PM, зоррыч wrote:
Hi.
I have been testing glusterfs's ability to work as a storage server. Unfortunately, oVirt has no direct glusterfs support.
Will this feature be added in the future?
I'll let someone else reply on the below, but as for ovirt-gluster integration - yes, it is in the works. this gives a general picture of the work being carried out: http://www.ovirt.org/wiki/AddingGlusterSupportToOvirt
I attempted the following scheme: glusterfs is mounted in a folder on a node, and that glusterfs mount is connected to oVirt via NFS.
It works =)
Now, trying to mount NFS to 127.0.0.1, I encountered an error:
Command:
[root@noc-4-m77 ~]# /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgcOezk
Error:
mount.nfs: Unknown error 521
NFS v4 is disabled.
This mount, in contrast, succeeds:
/bin/mount -t nfs 127.0.0.1:/share/tmp /tmpgtsoetsk
I understand that this is not an oVirt problem, but could you suggest any ideas on how to fix it?
To use glusterfs in oVirt, I execute the command:
mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log noc-1:/mht /share
Can I configure vdsm so that this command is carried out instead of /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgtsOetsk?

Simple thing to try: create a URL something like nfs.glusterfs.example.com and add a host entry on each system pointing to that system's own IP (not 127.0.0.1, but the IP other stations would use to talk to it). That would point every system to its local client.
Thanks
Robert
On 3/1/2012 6:48 AM, зоррыч wrote:
Hi.
I have been testing glusterfs's ability to work as a storage server. Unfortunately, oVirt has no direct glusterfs support.
Will this feature be added in the future?
I attempted the following scheme: glusterfs is mounted in a folder on a node, and that glusterfs mount is connected to oVirt via NFS.
It works =)
Now, trying to mount NFS to 127.0.0.1, I encountered an error:
Command:
[root@noc-4-m77 ~]# /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgcOezk
Error:
mount.nfs: Unknown error 521
NFS v4 is disabled.
This mount, in contrast, succeeds:
/bin/mount -t nfs 127.0.0.1:/share/tmp /tmpgtsoetsk
I understand that this is not an oVirt problem, but could you suggest any ideas on how to fix it?
To use glusterfs in oVirt, I execute the command:
mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log noc-1:/mht /share
Can I configure vdsm so that this command is carried out instead of /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgtsOetsk?
Simple thing to try: create a URL, something like nfs.glusterfs.example.com, and add a host entry on each system pointing to the IP of that system. Not 127.0.0.1, but the IP other stations would use to talk to the system. That would point every system to its own local client.

Thanks
Robert

On 3/1/2012 6:48 AM, зоррыч wrote:
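Robert's workaround can be sketched as follows. The alias nfs.glusterfs.example.com and the address 192.0.2.10 are placeholders (only the example hostname appears in the thread), and the mount options are copied from the original post; treat this as an illustration, not a tested recipe.

```shell
# On each oVirt node, map a shared alias to that node's OWN externally
# reachable IP (not 127.0.0.1), so one identical mount spec works on
# every node but always resolves to the local Gluster/NFS server.
# 192.0.2.10 is a placeholder; substitute the node's real IP.
echo "192.0.2.10  nfs.glusterfs.example.com" >> /etc/hosts

# Every node can then run the same NFS mount command; on each node the
# alias resolves to its own local server:
mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 \
      -t nfs nfs.glusterfs.example.com:/share/tmp /mnt/data
```

This avoids the "Unknown error 521" seen when mounting the Gluster NFS export over 127.0.0.1, since the client talks to the server over a routable address instead of the loopback interface.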
participants (11)
- зоррыч
- Andrei Vakhnin
- Balamurugan Arumugam
- Bharata B Rao
- Deepak C Shetty
- Dor Laor
- Itamar Heim
- Jeff Bailey
- Robert Middleswarth
- Yair Zaslavsky
- зоррыч