From zorro at megatrone.ru  Thu Mar  1 06:48:46 2012
From: зоррыч <zorro at megatrone.ru>
To: users at ovirt.org
Subject: [Users] glusterfs and ovirt
Date: Thu, 01 Mar 2012 15:48:38 +0400

Hi.

I am testing GlusterFS as a storage server. Unfortunately oVirt has no
direct support for GlusterFS. Will this feature be added in the future?

I attempted the following scheme: the GlusterFS volume is mounted into a
folder on a node, and that mount is then connected to oVirt via NFS.
It works =)

Now I am trying to mount the NFS export from 127.0.0.1 and hit an error.

Command:

[root@noc-4-m77 ~]# /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgcOezk

Error:

mount.nfs: Unknown error 521

NFSv4 is disabled. This mount, without the extra options, succeeds:

/bin/mount -t nfs 127.0.0.1:/share/tmp /tmpgtsoetsk

I understand that this is not an oVirt problem, but perhaps you can
suggest ideas on how to fix it?

To use GlusterFS in oVirt I execute the command:

mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log noc-1:/mht /share

Can I configure vdsm so that this command is run instead of:

/bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgtsOetsk
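For reference, the two mounts involved in this workaround can be sketched
roughly as follows (host names, volume names and mount points are the ones
quoted in this thread; the NFS export is served by Gluster's built-in NFS
server, as the follow-ups below confirm):

    # Native GlusterFS (FUSE) mount of volume "mht" from node noc-1:
    mount -t glusterfs \
        -o log-level=WARNING,log-file=/var/log/gluster.log \
        noc-1:/mht /share

    # NFS mount that oVirt/vdsm issues for an NFS storage domain; here it
    # points at Gluster's NFS server over the loopback address, which is
    # what fails with "Unknown error 521":
    /bin/mount -t nfs \
        -o soft,timeo=600,retrans=6,nosharecache,vers=3 \
        127.0.0.1:/share/tmp /tmpgcOezk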
From iheim at redhat.com  Thu Mar  1 10:44:31 2012
From: Itamar Heim <iheim at redhat.com>
Subject: Re: [Users] glusterfs and ovirt
Date: Thu, 01 Mar 2012 17:44:24 +0200

On 03/01/2012 01:48 PM, зоррыч wrote:
> I am testing GlusterFS as a storage server. Unfortunately oVirt has no
> direct support for GlusterFS. Will this feature be added in the future?

I'll let someone else reply on the rest, but as for oVirt-Gluster
integration - yes, it is in the works. This gives a general picture of
the work being carried out:
http://www.ovirt.org/wiki/AddingGlusterSupportToOvirt

From zorro at megatrone.ru  Thu Mar  1 13:36:28 2012
From: зоррыч <zorro at megatrone.ru>
Subject: Re: [Users] glusterfs and ovirt
Date: Thu, 01 Mar 2012 22:36:24 +0400

Good news. Does it already work in a test version, or has development
not yet begun?

From iheim at redhat.com  Thu Mar  1 19:10:23 2012
From: Itamar Heim <iheim at redhat.com>
Subject: Re: [Users] glusterfs and ovirt
Date: Fri, 02 Mar 2012 02:10:12 +0200

On 03/01/2012 08:36 PM, зоррыч wrote:
> Good news. Does it already work in a test version, or has development
> not yet begun?

Early patches to build this support have now been sent to engine/vdsm,
so work has begun, but it is not working yet.
From barumuga at redhat.com  Mon Mar  5 01:29:41 2012
From: Balamurugan Arumugam <barumuga at redhat.com>
Subject: Re: [Users] glusterfs and ovirt
Date: Mon, 05 Mar 2012 01:29:37 -0500

Hi Zorro,

Can you tell me your Gluster version?

Regards,
Bala
From zorro at megatrone.ru  Mon Mar  5 08:12:35 2012
From: зоррыч <zorro at megatrone.ru>
Subject: Re: [Users] glusterfs and ovirt
Date: Mon, 05 Mar 2012 17:12:29 +0400

[root@noc-4-m77 ~]# gluster --version
glusterfs 3.2.5 built on Nov 15 2011 08:43:14
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc.
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU
General Public License.
[root@noc-4-m77 ~]#
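Before retrying the mount, a few standard commands can confirm that
Gluster's built-in NFS server is actually exporting the volume (a generic
sketch; the volume name "share" is assumed here and may differ on your
setup):

    # Check the volume status and options (NFS should not be disabled):
    gluster volume info share

    # Verify the export is visible and the NFS/mountd RPC services are
    # registered on the node:
    showmount -e 127.0.0.1
    rpcinfo -p 127.0.0.1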
From barumuga at redhat.com  Mon Mar 12 01:21:30 2012
From: Balamurugan Arumugam <barumuga at redhat.com>
Subject: Re: [Users] glusterfs and ovirt
Date: Mon, 12 Mar 2012 01:21:24 -0400

I discussed this with Zorro off the list; below is an update.

Mounting Gluster NFS over loopback is not qualified in Gluster v3.2.5.
The upcoming Gluster v3.3.0 will have this feature, fully qualified.

Regards,
Bala
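Until loopback mounts are qualified, one possible workaround (not stated
explicitly in this thread, so treat it as an assumption) is to point the
NFS mount at a non-loopback address, i.e. mount the Gluster NFS export of
another node instead of 127.0.0.1:

    # noc-1 is the Gluster node named earlier in the thread; paths and
    # options are the ones vdsm used in the failing command:
    /bin/mount -t nfs \
        -o soft,timeo=600,retrans=6,nosharecache,vers=3 \
        noc-1:/share/tmp /tmpgcOezk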
From zorro at megatrone.ru  Thu Apr  5 10:26:01 2012
From: зоррыч <zorro at megatrone.ru>
Subject: Re: [Users] glusterfs and ovirt
Date: Thu, 05 Apr 2012 18:25:55 +0400

Is the date when GlusterFS support will be added known yet?
Can I already test it with a test version of GlusterFS?
From iheim at redhat.com  Tue Apr 10 06:43:32 2012
From: Itamar Heim <iheim at redhat.com>
Subject: Re: [Users] glusterfs and ovirt
Date: Tue, 10 Apr 2012 13:43:22 +0300

On 04/05/2012 05:25 PM, зоррыч wrote:
> Is the date when GlusterFS support will be added known yet?
> Can I already test it with a test version of GlusterFS?

There are two parts to the support:

1. Managing the Gluster hosts/storage/volumes - patches for different
   parts of this are in gerrit, but this is not needed for the first
   phase of using it.
2. Using Gluster as a POSIX fs - the vdsm side is ready and, iirc, the
   engine side was done by Laszlo, who can share when test patches can
   be used for testing this.
3. Integrating both together - a bit later :)
From yzaslavs at redhat.com  Tue Apr 10 07:15:15 2012
From: Yair Zaslavsky <yzaslavs at redhat.com>
Subject: Re: [Users] glusterfs and ovirt
Date: Tue, 10 Apr 2012 14:18:32 +0300

On 04/10/2012 01:43 PM, Itamar Heim wrote:
> 2. Using Gluster as a POSIX fs - the vdsm side is ready and, iirc, the
>    engine side was done by Laszlo, who can share when test patches can
>    be used for testing this.

Small correction - those patches are not in gerrit yet (referring to the
POSIX FS work on the engine side).

From Andrey.A.Vakhnin at nasa.gov  Tue May 15 12:35:53 2012
From: Andrei Vakhnin <Andrey.A.Vakhnin at nasa.gov>
Subject: Re: [Users] glusterfs and ovirt
Date: Tue, 15 May 2012 12:35:50 -0400

Yair,

Thanks for the update. Can the KVM hypervisors also function as storage
nodes for GlusterFS? What is the release date for GlusterFS support?
We're looking at a production deployment in June. Thanks,

Andrei
From yzaslavs at redhat.com  Wed May 16 01:51:22 2012
From: Yair Zaslavsky <yzaslavs at redhat.com>
Subject: Re: [Users] glusterfs and ovirt
Date: Wed, 16 May 2012 08:54:53 +0300

On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
> Can the KVM hypervisors also function as storage nodes for GlusterFS?
> What is the release date for GlusterFS support? We're looking at a
> production deployment in June.

I am not sure about a release date. I can tell you that the patches for
POSIX FS (i.e. support for storage domains on POSIX-compliant
filesystems), covering the oVirt-engine core and API components, are in
gerrit. Most of that work is even merged (I just introduced some changes
and hope to get them reviewed and merged ASAP).
From iheim at redhat.com  Wed May 16 06:00:01 2012
From: Itamar Heim <iheim at redhat.com>
Subject: Re: [Users] glusterfs and ovirt
Date: Wed, 16 May 2012 12:59:55 +0300

On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
> Can the KVM hypervisors also function as storage nodes for GlusterFS?
> What is the release date for GlusterFS support? We're looking at a
> production deployment in June.

The current status is:

1. Patches for provisioning Gluster clusters and volumes via oVirt are
   in review, trying to cover this feature set [1]. I'm not sure all of
   them will make the oVirt 3.1 version, which is slated to branch for
   stabilization on June 1st, but I think "enough" is there. So I'd
   start trying the current upstream version to help find the issues
   blocking you, and follow up on them during June as we stabilize
   oVirt 3.1 for release (planned for the end of June).

2. You should be able to use the same hosts for both Gluster and virt,
   but there is no special logic/handling for this yet (i.e. trying it
   and providing feedback would help improve this mode). I would suggest
   starting with separate clusters first and only later trying the joint
   mode.

3. Creating a storage domain on top of Gluster:
   - expose NFS on top of it and consume it as a normal NFS storage domain
   - use a PosixFS storage domain with gluster mount semantics
   - future: probably a native Gluster storage domain, up to native
     integration with qemu

Can you please describe the use case you are trying to accommodate?

Thanks,
Itamar

[1] http://ovirt.org/wiki/Features/Gluster_Support
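In mount terms, the first two options under item 3 above amount to roughly
the following (a sketch only; the mount points are illustrative and the
exact options vdsm passes are not spelled out in this thread):

    # "expose NFS on top of it": consume Gluster's NFS export like any
    # NFS storage domain (non-loopback address assumed):
    mount -t nfs -o soft,timeo=600,retrans=6,nosharecache,vers=3 \
        noc-1:/share /mnt/gluster-nfs-domain

    # "PosixFS storage domain with gluster mount semantics": the same
    # volume consumed through the native glusterfs client:
    mount -t glusterfs noc-1:/mht /mnt/gluster-posix-domain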
From bharata.rao at gmail.com  Thu May 17 11:55:53 2012
From: Bharata B Rao <bharata.rao at gmail.com>
Subject: Re: [Users] glusterfs and ovirt
Date: Thu, 17 May 2012 21:25:52 +0530

On Wed, May 16, 2012 at 3:29 PM, Itamar Heim wrote:
> 3. Creating a storage domain on top of Gluster:
>    - expose NFS on top of it and consume it as a normal NFS storage domain
>    - use a PosixFS storage domain with gluster mount semantics
>    - future: probably a native Gluster storage domain, up to native
>      integration with qemu

I am looking at GlusterFS integration with QEMU, which involves adding
GlusterFS as a block backend in QEMU. This has QEMU talking to gluster
directly via libglusterfs, bypassing FUSE. I can specify a volume file
and the VM image directly on the QEMU command line to boot from a VM
image that resides on a gluster volume.

Eg: qemu -drive file=client.vol:/Fedora.img,format=gluster

In this example, Fedora.img is being served by gluster, and client.vol
would have the client-side translators specified.

I am not sure this use case would be served if GlusterFS is integrated
as a PosixFS storage domain in VDSM. PosixFS would involve a normal FUSE
mount, and QEMU would be required to work with images from the FUSE
mount path?

With QEMU supporting a GlusterFS backend natively, further optimizations
are possible when the gluster volume is local to the host node. In that
case, one could provide QEMU with a simple volume file that contains
neither client nor server xlators, but just the posix xlator. This leads
to the most optimal IO path, bypassing RPC calls altogether.

So do you think this use case (QEMU supporting a GlusterFS backend
natively and using a volume file to specify the needed translators)
warrants a specialized storage domain type for GlusterFS in VDSM?

Regards,
Bharata.
-- 
http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/
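As an illustration of the local-only case described above, a volume file
containing just the posix translator might look roughly like this (a
sketch: the brick directory and file names are made up, and the -drive
syntax is the one proposed in the mail above, not necessarily what QEMU
ships):

    # local.vol: no client or server xlators, just storage/posix pointing
    # at the local brick directory (illustrative path):
    cat > local.vol <<'EOF'
    volume posix0
      type storage/posix
      option directory /export/brick1
    end-volume
    EOF

    # Boot directly from an image on the gluster volume, bypassing FUSE:
    qemu -drive file=local.vol:/Fedora.img,format=gluster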
> > Eg: qemu -drive file=3Dclient.vol:/Fedora.img,format=3Dgluster > > In this example, Fedora.img is being served by gluster and client.vol > would have client-side translators specified. > > I am not sure if this use case would be served if GlusterFS is > integrated as posixfs storage domain in VDSM. Posixfs would involve > normal FUSE mount and QEMU would be required to work with images from > FUSE mount path ? > > With QEMU supporting GlusterFS backend natively, further optimizations > are possible in case of gluster volume being local to the host node. > In this case, one could provide QEMU with a simple volume file that > would not contain client or server xlators, but instead just the posix > xlator. This would lead to most optimal IO path that bypasses RPC > calls. > > So do you think, this use case (QEMU supporting GlusterFS backend > natively and using volume file to specify the needed translators) > warrants a specialized storage domain type for GlusterFS in VDSM ? I'm not sure if a special storage domain, or a PosixFS based domain with = enhanced capabilities. Ayal? --===============4739161884679360853==-- From bailey at cs.kent.edu Thu May 17 14:41:06 2012 Content-Type: multipart/mixed; boundary="===============8120190732674201835==" MIME-Version: 1.0 From: Jeff Bailey To: users at ovirt.org Subject: Re: [Users] glusterfs and ovirt Date: Thu, 17 May 2012 14:40:58 -0400 Message-ID: <4FB5463A.1030506@cs.kent.edu> In-Reply-To: 4FB536D1.9010301@redhat.com --===============8120190732674201835== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On 5/17/2012 1:35 PM, Itamar Heim wrote: > On 05/17/2012 06:55 PM, Bharata B Rao wrote: >> On Wed, May 16, 2012 at 3:29 PM, Itamar Heim wrote: >>> On 05/15/2012 07:35 PM, Andrei Vakhnin wrote: >>>> >>>> Yair >>>> >>>> Thanks for an update. Can I have KVM hypervisors also function as = >>>> storage >>>> nodes for glusterfs? What is a release date for glusterfs support? = >>>> We're >>>> looking for a production deployment in June. Thanks >>> >>> >>> current status is >>> 1. patches for provisioning gluster clusters and volumes via ovirt = >>> are in >>> review, trying to cover this feature set [1]. >>> I'm not sure if all of them will make the ovirt 3.1 version which is = >>> slated >>> to branch for stabilization June 1st, but i think "enough" is there. >>> so i'd start trying current upstream version to help find issues = >>> blocking >>> you, and following on them during june as we stabilize ovirt 3.1 for = >>> release >>> (planned for end of june). >>> >>> 2. you should be able to use same hosts for both gluster and virt, = >>> but there >>> is no special logic/handling for this yet (i.e., trying and providing >>> feedback would help improve this mode). >>> I would suggest start from separate clusters though first, and only = >>> later >>> trying the joint mode. >>> >>> 3. creating a storage domain on top of gluster: >>> - expose NFS on top of it, and consume as a normal nfs storage domain >>> - use posixfs storage domain with gluster mount semantics >>> - future: probably native gluster storage domain, up to native >>> integration with qemu >> >> I am looking at GlusterFS integration with QEMU which involves adding >> GlusterFS as block backend in QEMU. This will involve QEMU talking to >> gluster directly via libglusterfs bypassing FUSE. I could specify a >> volume file and the VM image directly on QEMU command line to boot >> from the VM image that resides on a gluster volume. 
>> >> Eg: qemu -drive file=3Dclient.vol:/Fedora.img,format=3Dgluster >> >> In this example, Fedora.img is being served by gluster and client.vol >> would have client-side translators specified. >> >> I am not sure if this use case would be served if GlusterFS is >> integrated as posixfs storage domain in VDSM. Posixfs would involve >> normal FUSE mount and QEMU would be required to work with images from >> FUSE mount path ? >> >> With QEMU supporting GlusterFS backend natively, further optimizations >> are possible in case of gluster volume being local to the host node. >> In this case, one could provide QEMU with a simple volume file that >> would not contain client or server xlators, but instead just the posix >> xlator. This would lead to most optimal IO path that bypasses RPC >> calls. >> >> So do you think, this use case (QEMU supporting GlusterFS backend >> natively and using volume file to specify the needed translators) >> warrants a specialized storage domain type for GlusterFS in VDSM ? > > I'm not sure if a special storage domain, or a PosixFS based domain = > with enhanced capabilities. > Ayal? Direct qemu support for gluster is similar to ceph rbd/rados object = storage which is also supported in qemu. A domain type which can handle = object based storage of this sort would be very nice. > _______________________________________________ > Users mailing list > Users(a)ovirt.org > http://lists.ovirt.org/mailman/listinfo/users --===============8120190732674201835==-- From deepakcs at linux.vnet.ibm.com Fri May 18 09:28:26 2012 Content-Type: multipart/mixed; boundary="===============5070610663411756857==" MIME-Version: 1.0 From: Deepak C Shetty To: users at ovirt.org Subject: Re: [Users] glusterfs and ovirt Date: Fri, 18 May 2012 18:58:12 +0530 Message-ID: <4FB64E6C.8090700@linux.vnet.ibm.com> In-Reply-To: 4FB536D1.9010301@redhat.com --===============5070610663411756857== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On 05/17/2012 11:05 PM, Itamar Heim wrote: > On 05/17/2012 06:55 PM, Bharata B Rao wrote: >> On Wed, May 16, 2012 at 3:29 PM, Itamar Heim wrote: >>> On 05/15/2012 07:35 PM, Andrei Vakhnin wrote: >>>> >>>> Yair >>>> >>>> Thanks for an update. Can I have KVM hypervisors also function as = >>>> storage >>>> nodes for glusterfs? What is a release date for glusterfs support? = >>>> We're >>>> looking for a production deployment in June. Thanks >>> >>> >>> current status is >>> 1. patches for provisioning gluster clusters and volumes via ovirt = >>> are in >>> review, trying to cover this feature set [1]. >>> I'm not sure if all of them will make the ovirt 3.1 version which is = >>> slated >>> to branch for stabilization June 1st, but i think "enough" is there. >>> so i'd start trying current upstream version to help find issues = >>> blocking >>> you, and following on them during june as we stabilize ovirt 3.1 for = >>> release >>> (planned for end of june). >>> >>> 2. you should be able to use same hosts for both gluster and virt, = >>> but there >>> is no special logic/handling for this yet (i.e., trying and providing >>> feedback would help improve this mode). >>> I would suggest start from separate clusters though first, and only = >>> later >>> trying the joint mode. >>> >>> 3. 
creating a storage domain on top of gluster: >>> - expose NFS on top of it, and consume as a normal nfs storage domain >>> - use posixfs storage domain with gluster mount semantics >>> - future: probably native gluster storage domain, up to native >>> integration with qemu >> >> I am looking at GlusterFS integration with QEMU which involves adding >> GlusterFS as block backend in QEMU. This will involve QEMU talking to >> gluster directly via libglusterfs bypassing FUSE. I could specify a >> volume file and the VM image directly on QEMU command line to boot >> from the VM image that resides on a gluster volume. >> >> Eg: qemu -drive file=3Dclient.vol:/Fedora.img,format=3Dgluster >> >> In this example, Fedora.img is being served by gluster and client.vol >> would have client-side translators specified. >> >> I am not sure if this use case would be served if GlusterFS is >> integrated as posixfs storage domain in VDSM. Posixfs would involve >> normal FUSE mount and QEMU would be required to work with images from >> FUSE mount path ? >> >> With QEMU supporting GlusterFS backend natively, further optimizations >> are possible in case of gluster volume being local to the host node. >> In this case, one could provide QEMU with a simple volume file that >> would not contain client or server xlators, but instead just the posix >> xlator. This would lead to most optimal IO path that bypasses RPC >> calls. >> >> So do you think, this use case (QEMU supporting GlusterFS backend >> natively and using volume file to specify the needed translators) >> warrants a specialized storage domain type for GlusterFS in VDSM ? > > I'm not sure if a special storage domain, or a PosixFS based domain = > with enhanced capabilities. > Ayal? Related Question: With QEMU using GlusterFS backend natively (as described above), it = also means that it needs addnl options/parameters as part of qemu command line (as given = above). How does VDSM today support generating a custom qemu cmdline. I know = VDSM talks to libvirt, so is there a framework in VDSM to edit/modify the domxml based on some = pre-conditions, and how / where one should hook up to do that modification ? I know of = libvirt hooks framework in VDSM, but that was more for temporary/experimental needs, = or am i completely wrong here ? Irrespective of whether GlusterFS integrates into VDSM as PosixFS or = special storage domain it won't address the need to generate a custom qemu cmdline if a = file/image was served by GlusterFS. Whats the way to address this issue in VDSM ? I am assuming here that special storage domain (aka repo engine) is only = to manage image repository, and image related operations, won't help in modifying qemu = cmd line being generated. [Ccing vdsm-devel also] thanx, deepak --===============5070610663411756857==-- From robert at middleswarth.net Sat May 19 20:01:46 2012 Content-Type: multipart/mixed; boundary="===============4416105241057158472==" MIME-Version: 1.0 From: Robert Middleswarth To: users at ovirt.org Subject: Re: [Users] glusterfs and ovirt Date: Sat, 19 May 2012 20:01:33 -0400 Message-ID: <4FB8345D.4010107@middleswarth.net> In-Reply-To: 004501ccf7a1$3e1216f0$ba3644d0$@ru --===============4416105241057158472== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable This is a multi-part message in MIME format. --------------050405070507020703060109 Content-Type: text/plain; charset=3DISO-8859-1; format=3Dflowed Content-Transfer-Encoding: 7bit Simple thing to try. 
Create a hostname, something like nfs.glusterfs.example.com, and add a host entry on each system pointing to the IP of that system (not 127.0.0.1, but the IP other stations would use to talk to it). That would point every system at its own local client. (A sketch of such a hosts entry follows the quoted message below.)

Thanks
Robert

On 3/1/2012 6:48 AM, ?????? wrote:
>
> Hi.
>
> Test the ability to work as a storage server glusterfs. Direct support
> to glusterf ovirt unfortunately not.
>
> This feature will be added in the future?
>
> Attempted to implement a scheme of work -> glusterfs mounted on a node
> in a folder mount glusterfs connected via NFS to ovirt.
>
> It works =)
>
> Now try to mount NFS to 127.0.0.1 and encountered an error:
>
> Command:
>
> [root(a)noc-4-m77 ~] # /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp /tmpgcOezk
>
> Error:
>
> mount.nfs: Unknown error 521
>
> NFS V4 is disabled.
>
> In this mount:
>
> /bin/mount -t nfs 127.0.0.1:/share/ tmp/tmpgtsoetsk is successful.
>
> I understand that this is not a problem ovirt, but you might prompt
> any ideas how to fix it?
>
> To use glusterfs in overt to execute a commandL
>
> Mount -t glusterfs -o log-level=WARNING,log-file=/var/log/gluster.log noc-1:/mht /share
>
> I can prescribe it in vdsm that it was carried out instead of
> /bin/mount -o soft,timeo=600,retrans=6,nosharecache,vers=3 -t nfs 127.0.0.1:/share/tmp/tmpgtsOetsk
>
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
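The hosts entry referred to above might look like the following minimal sketch. The 192.0.2.11 address is illustrative (each hypervisor would use its own storage-network address); the export path is the one from the quoted message:

    # /etc/hosts on each hypervisor
    192.0.2.11   nfs.glusterfs.example.com

    # oVirt can then be given the same storage path everywhere, e.g.
    #   nfs.glusterfs.example.com:/share/tmp

Every host resolves the shared name to itself, so the engine sees a single storage path while each node talks to its own gluster/NFS endpoint.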

Thanks
Robert

On 3/1/2012 6:48 AM, ?????? wrote:

Hi.

Test the ability to work as a storage server glusterfs. Direct support to glusterf ovirt unfortunately not.

This feature will be added in the future?

 

Attempted to implement a scheme of work -> glusterfs mounted on a node in a folder mount glusterfs connected via NFS to ovirt.

It works =3D)

Now try to mount NFS to 127.0.0.1 and encountered an error:

Command:

[root(a)noc-4-m77 ~] # / bin / mount-o soft, timeo =3D 600, retrans =3D 6, nosharecache, vers =3D 3 -t nfs 127.0.0.1 :/share/tmp /tmpgcOezk

Error:

mount.nfs: Unknown error 521

 

NFS V4 is disabled.

In this mount:

/bin/mount -t nfs 127.0.0.1:/share/ tmp/tmpgtsoetsk is successful.

I understand that this is not a problem ovirt, but you might prompt any ideas how to fix it?

 

To use glusterfs in overt  to execute a commandL

Mount -t glusterfs -o log-level =3D WARNING, log-file =3D /var/log/gluster.log noc-1 :/mht /  /share

I can prescribe it in vdsm that it was carried out instead of /bin/mount-o soft, timeo =3D 600, retrans =3D 6, nosharecache, vers =3D 3 -t nfs 127.0.0.1:/share/tmp/tmpgtsOetsk

 

 



_______________________________________________
Users mailing list
Use=
rs(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

--------------050405070507020703060109--

--===============4416105241057158472==--

From dlaor at redhat.com Sun May 20 07:27:58 2012
Content-Type: multipart/mixed; boundary="===============5887288750197478570=="
MIME-Version: 1.0
From: Dor Laor
To: users at ovirt.org
Subject: Re: [Users] [vdsm] glusterfs and ovirt
Date: Sun, 20 May 2012 14:27:53 +0300
Message-ID: <4FB8D539.6000609@redhat.com>
In-Reply-To: 4FB64E6C.8090700@linux.vnet.ibm.com

--===============5887288750197478570==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

On 05/18/2012 04:28 PM, Deepak C Shetty wrote:
> On 05/17/2012 11:05 PM, Itamar Heim wrote:
>> On 05/17/2012 06:55 PM, Bharata B Rao wrote:
>>> On Wed, May 16, 2012 at 3:29 PM, Itamar Heim wrote:
>>>> On 05/15/2012 07:35 PM, Andrei Vakhnin wrote:
>>>>>
>>>>> Yair >>>>> >>>>> Thanks for an update. Can I have KVM hypervisors also function as >>>>> storage >>>>> nodes for glusterfs? What is a release date for glusterfs support? >>>>> We're >>>>> looking for a production deployment in June. Thanks >>>> >>>> >>>> current status is >>>> 1. patches for provisioning gluster clusters and volumes via ovirt >>>> are in >>>> review, trying to cover this feature set [1]. >>>> I'm not sure if all of them will make the ovirt 3.1 version which is >>>> slated >>>> to branch for stabilization June 1st, but i think "enough" is there. >>>> so i'd start trying current upstream version to help find issues >>>> blocking >>>> you, and following on them during june as we stabilize ovirt 3.1 for >>>> release >>>> (planned for end of june). >>>> >>>> 2. you should be able to use same hosts for both gluster and virt, >>>> but there >>>> is no special logic/handling for this yet (i.e., trying and providing >>>> feedback would help improve this mode). >>>> I would suggest start from separate clusters though first, and only >>>> later >>>> trying the joint mode. >>>> >>>> 3. creating a storage domain on top of gluster: >>>> - expose NFS on top of it, and consume as a normal nfs storage domain >>>> - use posixfs storage domain with gluster mount semantics >>>> - future: probably native gluster storage domain, up to native >>>> integration with qemu >>> >>> I am looking at GlusterFS integration with QEMU which involves adding >>> GlusterFS as block backend in QEMU. This will involve QEMU talking to >>> gluster directly via libglusterfs bypassing FUSE. I could specify a >>> volume file and the VM image directly on QEMU command line to boot >>> from the VM image that resides on a gluster volume. >>> >>> Eg: qemu -drive file=3Dclient.vol:/Fedora.img,format=3Dgluster >>> >>> In this example, Fedora.img is being served by gluster and client.vol >>> would have client-side translators specified. >>> >>> I am not sure if this use case would be served if GlusterFS is >>> integrated as posixfs storage domain in VDSM. Posixfs would involve >>> normal FUSE mount and QEMU would be required to work with images from >>> FUSE mount path ? >>> >>> With QEMU supporting GlusterFS backend natively, further optimizations >>> are possible in case of gluster volume being local to the host node. >>> In this case, one could provide QEMU with a simple volume file that >>> would not contain client or server xlators, but instead just the posix >>> xlator. This would lead to most optimal IO path that bypasses RPC >>> calls. >>> >>> So do you think, this use case (QEMU supporting GlusterFS backend >>> natively and using volume file to specify the needed translators) >>> warrants a specialized storage domain type for GlusterFS in VDSM ? >> >> I'm not sure if a special storage domain, or a PosixFS based domain >> with enhanced capabilities. >> Ayal? > > Related Question: > With QEMU using GlusterFS backend natively (as described above), it also > means that > it needs addnl options/parameters as part of qemu command line (as given > above). There is no support in qemu for gluster yet but it will be there not far = away > > How does VDSM today support generating a custom qemu cmdline. I know > VDSM talks to libvirt, > so is there a framework in VDSM to edit/modify the domxml based on some > pre-conditions, > and how / where one should hook up to do that modification ? I know of > libvirt hooks > framework in VDSM, but that was more for temporary/experimental needs, > or am i completely > wrong here ? 
> > Irrespective of whether GlusterFS integrates into VDSM as PosixFS or > special storage domain > it won't address the need to generate a custom qemu cmdline if a > file/image was served by > GlusterFS. Whats the way to address this issue in VDSM ? > > I am assuming here that special storage domain (aka repo engine) is only > to manage image > repository, and image related operations, won't help in modifying qemu > cmd line being generated. > > [Ccing vdsm-devel also] > > thanx, > deepak > > > _______________________________________________ > vdsm-devel mailing list > vdsm-devel(a)lists.fedorahosted.org > https://fedorahosted.org/mailman/listinfo/vdsm-devel --===============5887288750197478570==-- From bharata.rao at gmail.com Sun May 20 23:15:12 2012 Content-Type: multipart/mixed; boundary="===============6190788811974282740==" MIME-Version: 1.0 From: Bharata B Rao To: users at ovirt.org Subject: Re: [Users] [vdsm] glusterfs and ovirt Date: Mon, 21 May 2012 08:45:11 +0530 Message-ID: In-Reply-To: 4FB8D539.6000609@redhat.com --===============6190788811974282740== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On Sun, May 20, 2012 at 4:57 PM, Dor Laor wrote: > On 05/18/2012 04:28 PM, Deepak C Shetty wrote: >> >> On 05/17/2012 11:05 PM, Itamar Heim wrote: >>> >>> On 05/17/2012 06:55 PM, Bharata B Rao wrote: >>>> I am looking at GlusterFS integration with QEMU which involves adding >>>> GlusterFS as block backend in QEMU. This will involve QEMU talking to >>>> gluster directly via libglusterfs bypassing FUSE. I could specify a >>>> volume file and the VM image directly on QEMU command line to boot >>>> from the VM image that resides on a gluster volume. >>>> >>>> Eg: qemu -drive file=3Dclient.vol:/Fedora.img,format=3Dgluster >>>> >>>> In this example, Fedora.img is being served by gluster and client.vol >>>> would have client-side translators specified. >>>> >>>> I am not sure if this use case would be served if GlusterFS is >>>> integrated as posixfs storage domain in VDSM. Posixfs would involve >>>> normal FUSE mount and QEMU would be required to work with images from >>>> FUSE mount path ? >>>> >>>> With QEMU supporting GlusterFS backend natively, further optimizations >>>> are possible in case of gluster volume being local to the host node. >>>> In this case, one could provide QEMU with a simple volume file that >>>> would not contain client or server xlators, but instead just the posix >>>> xlator. This would lead to most optimal IO path that bypasses RPC >>>> calls. >>>> >>>> So do you think, this use case (QEMU supporting GlusterFS backend >>>> natively and using volume file to specify the needed translators) >>>> warrants a specialized storage domain type for GlusterFS in VDSM ? >>> >>> >>> I'm not sure if a special storage domain, or a PosixFS based domain >>> with enhanced capabilities. >>> Ayal? >> >> >> Related Question: >> With QEMU using GlusterFS backend natively (as described above), it also >> means that >> it needs addnl options/parameters as part of qemu command line (as given >> above). > > > There is no support in qemu for gluster yet but it will be there not far > away As I said above, I am working on this. Will post the patches shortly. Regards, Bharata. 
-- = http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/ --===============6190788811974282740==-- From dlaor at redhat.com Mon May 21 02:55:59 2012 Content-Type: multipart/mixed; boundary="===============8069163708964993109==" MIME-Version: 1.0 From: Dor Laor To: users at ovirt.org Subject: Re: [Users] [vdsm] glusterfs and ovirt Date: Mon, 21 May 2012 09:55:51 +0300 Message-ID: <4FB9E6F7.5010007@redhat.com> In-Reply-To: CAGZKiBpogh8rEK3zgX4kYkAHvbAcAPgc7V00vWL4iLahgsKv=w@mail.gmail.com --===============8069163708964993109== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On 05/21/2012 06:15 AM, Bharata B Rao wrote: > On Sun, May 20, 2012 at 4:57 PM, Dor Laor wrote: >> On 05/18/2012 04:28 PM, Deepak C Shetty wrote: >>> >>> On 05/17/2012 11:05 PM, Itamar Heim wrote: >>>> >>>> On 05/17/2012 06:55 PM, Bharata B Rao wrote: >>>>> I am looking at GlusterFS integration with QEMU which involves adding >>>>> GlusterFS as block backend in QEMU. This will involve QEMU talking to >>>>> gluster directly via libglusterfs bypassing FUSE. I could specify a >>>>> volume file and the VM image directly on QEMU command line to boot >>>>> from the VM image that resides on a gluster volume. >>>>> >>>>> Eg: qemu -drive file=3Dclient.vol:/Fedora.img,format=3Dgluster >>>>> >>>>> In this example, Fedora.img is being served by gluster and client.vol >>>>> would have client-side translators specified. >>>>> >>>>> I am not sure if this use case would be served if GlusterFS is >>>>> integrated as posixfs storage domain in VDSM. Posixfs would involve >>>>> normal FUSE mount and QEMU would be required to work with images from >>>>> FUSE mount path ? >>>>> >>>>> With QEMU supporting GlusterFS backend natively, further optimizations >>>>> are possible in case of gluster volume being local to the host node. >>>>> In this case, one could provide QEMU with a simple volume file that >>>>> would not contain client or server xlators, but instead just the posix >>>>> xlator. This would lead to most optimal IO path that bypasses RPC >>>>> calls. >>>>> >>>>> So do you think, this use case (QEMU supporting GlusterFS backend >>>>> natively and using volume file to specify the needed translators) >>>>> warrants a specialized storage domain type for GlusterFS in VDSM ? >>>> >>>> >>>> I'm not sure if a special storage domain, or a PosixFS based domain >>>> with enhanced capabilities. >>>> Ayal? >>> >>> >>> Related Question: >>> With QEMU using GlusterFS backend natively (as described above), it also >>> means that >>> it needs addnl options/parameters as part of qemu command line (as given >>> above). >> >> >> There is no support in qemu for gluster yet but it will be there not far >> away > > As I said above, I am working on this. Will post the patches shortly. /me apologize for the useless noise, I'm using a new thunderbird plugin = that collapses quotes and it made me loss the context. > > Regards, > Bharata. 
--===============8069163708964993109==-- From iheim at redhat.com Tue Jun 5 17:35:52 2012 Content-Type: multipart/mixed; boundary="===============8548408391982531287==" MIME-Version: 1.0 From: Itamar Heim To: users at ovirt.org Subject: Re: [Users] glusterfs and ovirt Date: Wed, 06 Jun 2012 00:35:45 +0300 Message-ID: <4FCE7BB1.7010108@redhat.com> In-Reply-To: 4FB64E6C.8090700@linux.vnet.ibm.com --===============8548408391982531287== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On 05/18/2012 04:28 PM, Deepak C Shetty wrote: > On 05/17/2012 11:05 PM, Itamar Heim wrote: >> On 05/17/2012 06:55 PM, Bharata B Rao wrote: >>> On Wed, May 16, 2012 at 3:29 PM, Itamar Heim wrote: >>>> On 05/15/2012 07:35 PM, Andrei Vakhnin wrote: >>>>> >>>>> Yair >>>>> >>>>> Thanks for an update. Can I have KVM hypervisors also function as >>>>> storage >>>>> nodes for glusterfs? What is a release date for glusterfs support? >>>>> We're >>>>> looking for a production deployment in June. Thanks >>>> >>>> >>>> current status is >>>> 1. patches for provisioning gluster clusters and volumes via ovirt >>>> are in >>>> review, trying to cover this feature set [1]. >>>> I'm not sure if all of them will make the ovirt 3.1 version which is >>>> slated >>>> to branch for stabilization June 1st, but i think "enough" is there. >>>> so i'd start trying current upstream version to help find issues >>>> blocking >>>> you, and following on them during june as we stabilize ovirt 3.1 for >>>> release >>>> (planned for end of june). >>>> >>>> 2. you should be able to use same hosts for both gluster and virt, >>>> but there >>>> is no special logic/handling for this yet (i.e., trying and providing >>>> feedback would help improve this mode). >>>> I would suggest start from separate clusters though first, and only >>>> later >>>> trying the joint mode. >>>> >>>> 3. creating a storage domain on top of gluster: >>>> - expose NFS on top of it, and consume as a normal nfs storage domain >>>> - use posixfs storage domain with gluster mount semantics >>>> - future: probably native gluster storage domain, up to native >>>> integration with qemu >>> >>> I am looking at GlusterFS integration with QEMU which involves adding >>> GlusterFS as block backend in QEMU. This will involve QEMU talking to >>> gluster directly via libglusterfs bypassing FUSE. I could specify a >>> volume file and the VM image directly on QEMU command line to boot >>> from the VM image that resides on a gluster volume. >>> >>> Eg: qemu -drive file=3Dclient.vol:/Fedora.img,format=3Dgluster >>> >>> In this example, Fedora.img is being served by gluster and client.vol >>> would have client-side translators specified. >>> >>> I am not sure if this use case would be served if GlusterFS is >>> integrated as posixfs storage domain in VDSM. Posixfs would involve >>> normal FUSE mount and QEMU would be required to work with images from >>> FUSE mount path ? >>> >>> With QEMU supporting GlusterFS backend natively, further optimizations >>> are possible in case of gluster volume being local to the host node. >>> In this case, one could provide QEMU with a simple volume file that >>> would not contain client or server xlators, but instead just the posix >>> xlator. This would lead to most optimal IO path that bypasses RPC >>> calls. 
>>> >>> So do you think, this use case (QEMU supporting GlusterFS backend >>> natively and using volume file to specify the needed translators) >>> warrants a specialized storage domain type for GlusterFS in VDSM ? >> >> I'm not sure if a special storage domain, or a PosixFS based domain >> with enhanced capabilities. >> Ayal? > > Related Question: > With QEMU using GlusterFS backend natively (as described above), it also > means that > it needs addnl options/parameters as part of qemu command line (as given > above). > > How does VDSM today support generating a custom qemu cmdline. I know > VDSM talks to libvirt, > so is there a framework in VDSM to edit/modify the domxml based on some > pre-conditions, > and how / where one should hook up to do that modification ? I know of > libvirt hooks > framework in VDSM, but that was more for temporary/experimental needs, > or am i completely > wrong here ? for something vdsm is not aware of yet - you can use vdsm custom hooks = to manipulate the libvirt xml. > > Irrespective of whether GlusterFS integrates into VDSM as PosixFS or > special storage domain > it won't address the need to generate a custom qemu cmdline if a > file/image was served by > GlusterFS. Whats the way to address this issue in VDSM ? when vdsm supports this I expect it will know to pass these. it won't necessarily be a generic PosixFS at that time. > > I am assuming here that special storage domain (aka repo engine) is only > to manage image > repository, and image related operations, won't help in modifying qemu > cmd line being generated. support by vdsm for specific qemu options (via libvirt) will be done by = either having a special type of storage domain, or some capability = exchange, etc. > > [Ccing vdsm-devel also] > > thanx, > deepak > > --===============8548408391982531287==--
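For Deepak's question about where to hook in, a minimal sketch of the custom-hook mechanism Itamar mentions above. Assumptions: hook scripts are executables placed under /usr/libexec/vdsm/hooks/before_vm_start/ and receive the path of the libvirt domain XML in the _hook_domxml environment variable; the log path is illustrative:

    #!/bin/bash
    # before_vm_start hook sketch: only inspects the domain XML that VDSM
    # is about to hand to libvirt. A real gluster hook would rewrite the
    # disk <source> elements in this file rather than just logging them.
    grep -o "file='[^']*'" "$_hook_domxml" >> /var/log/vdsm/gluster-hook.log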
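And for the "posix xlator only" volume file Bharata describes for a volume local to the host, a rough sketch is shown below. The brick path and file names are illustrative, and the -drive syntax is the one proposed earlier in this thread, not a released QEMU interface:

    # local.vol
    volume local-brick
      type storage/posix
      option directory /export/images
    end-volume

    # proposed invocation from this thread:
    # qemu -drive file=local.vol:/Fedora.img,format=gluster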