From: Alan Murrell <lists at murrell.ca>
To: users at ovirt.org
Subject: [ovirt-users] Storage question: single node gluster?
Date: Mon, 13 Jul 2015 21:18:31 -0700

Hi There,

This is not strictly oVirt, but it is storage-related, so hopefully you
will indulge me?

Is there any detriment (performance or otherwise) in setting up
single-node GlusterFS storage? I know GlusterFS is designed to be used
with multiple nodes, but I am wondering if there are any ill effects in
configuring my current storage as a single-node cluster, with the idea of
possibly adding more nodes in the future.

Thanks! :-)

-Alan


From: Aharon Canan <acanan at redhat.com>
To: users at ovirt.org
Subject: Re: [ovirt-users] Storage question: single node gluster?
Date: Tue, 14 Jul 2015 06:49:48 -0400

AFAIK it is not the best practice from Gluster's point of view, but I
tested it with a single node and it should work fine. Everything depends
on your traffic and configuration.

For example, a single node with only one NIC is not the same as one with
multiple NICs, and using a single HDD for both the OS and the volumes is
not the same as using one SSD for the OS and another SSD for the volumes.

So... it depends on the setup.

Regards,
__________________________________________________
Aharon Canan
int phone - 8272036
ext phone - +97297692036
email - acanan(a)redhat.com

----- Original Message -----
> From: "Alan Murrell" <lists(a)murrell.ca>
> To: users(a)ovirt.org
> Sent: Tuesday, July 14, 2015 7:18:31 AM
> Subject: [ovirt-users] Storage question: single node gluster?
>
> Hi There,
>
> This is not strictly oVirt, but it is storage-related, so hopefully you
> will indulge me?
>
> Is there any detriment (performance or otherwise) in setting up
> single-node GlusterFS storage? I know GlusterFS is designed to be used
> with multiple nodes, but I am wondering if there are any ill effects in
> configuring my current storage as a single-node cluster, with the idea
> of possibly adding more nodes in the future.
>
> Thanks! :-)
>
> -Alan
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
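As a minimal sketch of the kind of single-brick setup Aharon describes
(the host name "gluster1", device /dev/sdb1, brick path /data/brick1 and
volume name "vmstore" are example values only; adjust them for your
environment):

    # dedicated filesystem for the brick, separate from the OS disk
    mkfs.xfs /dev/sdb1
    mkdir -p /data/brick1
    mount /dev/sdb1 /data/brick1
    mkdir -p /data/brick1/vmstore

    # single-brick (plain distribute) volume -- no replication yet
    gluster volume create vmstore gluster1:/data/brick1/vmstore
    gluster volume start vmstore

    # optional: if your Gluster packages ship the "virt" tuning group,
    # it is commonly applied to volumes that hold VM images
    gluster volume set vmstore group virt

Clients, or an oVirt GlusterFS storage domain, would then point at
gluster1:/vmstore.
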
From: Amit Aviram <aaviram at redhat.com>
To: users at ovirt.org
Subject: Re: [ovirt-users] Storage question: single node gluster?
Date: Wed, 15 Jul 2015 03:20:20 -0400

Hey Alan,

Using a single Gluster node will be like mounting any other storage, just
without Gluster's advantages of backing up/distributing the FS.
Also, you can attach nodes to your GlusterFS later on, after you start
using it.

(Thanks to Sahina Bose from the Gluster team for the advice (: )

----- Original Message -----
> From: "Alan Murrell" <lists(a)murrell.ca>
> To: users(a)ovirt.org
> Sent: Tuesday, July 14, 2015 7:18:31 AM
> Subject: [ovirt-users] Storage question: single node gluster?
>
> Hi There,
>
> This is not strictly oVirt, but it is storage-related, so hopefully you
> will indulge me?
>
> Is there any detriment (performance or otherwise) in setting up
> single-node GlusterFS storage?
> I know GlusterFS is designed to be used with multiple nodes, but I am
> wondering if there are any ill effects in configuring my current storage
> as a single-node cluster, with the idea of possibly adding more nodes in
> the future.
>
> Thanks! :-)
>
> -Alan
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


From: Alan Murrell <lists at murrell.ca>
To: users at ovirt.org
Subject: Re: [ovirt-users] Storage question: single node gluster?
Date: Sat, 18 Jul 2015 14:18:46 -0700

Hi all,

On 15/07/15 12:20 AM, Amit Aviram wrote:
> Hey Alan,
>
> Using a single Gluster node will be like mounting any other storage, just
> without Gluster's advantages of backing up/distributing the FS.
> Also, you can attach nodes to your GlusterFS later on, after you start
> using it.
>
> (Thanks to Sahina Bose from the Gluster team for the advice (: )

That is what I figured. I was only going to set up a single "node" for
now, so that if I decide to add more nodes in the future I won't have to
make any changes to my current storage (as far as preparing it for
Gluster); I will just need to add the additional node(s).

-Alan
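As a rough sketch of what "just adding the additional node(s)" later can
look like, reusing the example names from the earlier sketch and assuming
a new host "gluster2" with a brick prepared at the same path (all names
hypothetical):

    # run from the existing node: join the new host to the trusted pool
    gluster peer probe gluster2
    gluster peer status

    # grow the single-brick volume into a 2-way replica
    gluster volume add-brick vmstore replica 2 gluster2:/data/brick1/vmstore

    # let self-heal copy the existing data onto the new brick, then verify
    gluster volume heal vmstore full
    gluster volume info vmstore

Note that a plain two-way replica is prone to split-brain; a third replica
or an arbiter brick is generally recommended once more nodes are available.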