From sabose at redhat.com Tue Apr 9 06:11:06 2013
From: Sahina Bose
To: devel at ovirt.org
Subject: [Engine-devel] ovirt-host-deploy and multiple bridges
Date: Tue, 09 Apr 2013 15:40:58 +0530
Message-ID: <5163E932.8040300@redhat.com>
Hi all,

I'm testing bootstrapping a host without reboot on Fedora 18. After the
host's bootstrap, the ifconfig output shows:

ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
         inet 10.70.37.219  netmask 255.255.254.0  broadcast 10.70.37.255
   <snipped>

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
         inet 192.168.122.1  netmask 255.255.255.0  broadcast
192.168.122.255
        <snipped>

Running the *glusterHostsList* vdsm verb returns the IP address
192.168.122.1, whereas my host was added with the IP address 10.70.37.219.

If I reboot the host, the virbr0 bridge is removed, and there's no issue.

The vdsm verb glusterHostsList returns the ipAddress of the host plus the output of gluster peer probe. This is needed because a periodic sync job must ensure that the hosts added in the engine are in sync with the gluster CLI (hosts can also be added/removed from the gluster CLI).

How can we make sure glusterHostsList picks the correct ipAddress? Reading the inet info based on the bridge has been vetoed, as we are doing away with bridges.

It would also work if virbr0 were updated in the vds_interfaces table. Since this is not happening either, we have an issue.

thanks
sahina
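The mismatch reported above can be reproduced with a minimal sketch (hypothetical code, not the actual vdsm implementation; the interface names and addresses are taken from the ifconfig output, and the enumeration order is illustrative):

```python
# Hypothetical sketch: why a naive "first non-loopback address" lookup
# reports virbr0's address while the bridge exists.
ADDRS = [
    ("lo", "127.0.0.1"),
    ("virbr0", "192.168.122.1"),    # libvirt's default NAT bridge
    ("ovirtmgmt", "10.70.37.219"),  # the address the engine knows
]

def first_non_loopback(addrs):
    """Return the first address outside 127.0.0.0/8, or None."""
    for _iface, addr in addrs:
        if not addr.startswith("127."):
            return addr
    return None

# virbr0 is enumerated before ovirtmgmt here, so the verb would report
# 192.168.122.1 instead of 10.70.37.219.
print(first_non_loopback(ADDRS))  # -> 192.168.122.1
```

After a reboot removes virbr0, the same pick happens to land on ovirtmgmt's address, which is why the issue disappears.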

From sabose at redhat.com Tue Apr 9 06:25:31 2013
From: Sahina Bose
To: devel at ovirt.org
Subject: Re: [Engine-devel] ovirt-host-deploy and multiple bridges
Date: Tue, 09 Apr 2013 15:55:25 +0530
Message-ID: <5163EC95.50901@redhat.com>
In-Reply-To: 5163E932.8040300@redhat.com

[Adding vdsm-devel]

On 04/09/2013 03:40 PM, Sahina Bose wrote:
> Hi all,
>
> I'm testing bootstrapping a host without reboot on Fedora 18. After the
> host's bootstrap, the ifconfig output shows:
>
> ovirtmgmt: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>          inet 10.70.37.219  netmask 255.255.254.0  broadcast 10.70.37.255
>
> virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
>          inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
>
> Running the *glusterHostsList* vdsm verb returns the IP address
> 192.168.122.1, whereas my host was added with the IP address 10.70.37.219.
>
> If I reboot the host, the virbr0 bridge is removed, and there's no issue.
>
> The vdsm verb glusterHostsList returns the ipAddress of the host plus the
> output of gluster peer probe. This is needed because a periodic sync job
> must ensure that the hosts added in the engine are in sync with the
> gluster CLI (hosts can also be added/removed from the gluster CLI).
>
> How can we make sure glusterHostsList picks the correct ipAddress?
> Reading the inet info based on the bridge has been vetoed, as we are
> doing away with bridges.
>
> It would also work if virbr0 were updated in the vds_interfaces table.
> Since this is not happening either, we have an issue.
>
> thanks
> sahina
>
> _______________________________________________
> Engine-devel mailing list
> Engine-devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/engine-devel

From danken at redhat.com Tue Apr 9 08:36:07 2013
From: Dan Kenigsberg
To: devel at ovirt.org
Subject: Re: [Engine-devel] ovirt-host-deploy and multiple bridges
Date: Tue, 09 Apr 2013 15:35:35 +0300
Message-ID: <20130409123535.GK26069@redhat.com>
In-Reply-To: 5163EC95.50901@redhat.com

On Tue, Apr 09, 2013 at 03:55:25PM +0530, Sahina Bose wrote:
> [Adding vdsm-devel]
>
> On 04/09/2013 03:40 PM, Sahina Bose wrote:
> > [...]
> > Running the *glusterHostsList* vdsm verb returns the IP address
> > 192.168.122.1, whereas my host was added with the IP address
> > 10.70.37.219.
> > [...]
> > How can we make sure glusterHostsList picks the correct ipAddress?

Can you define (in plain English) what is the "correct" address?
The host may have multiple valid addresses (storage, migration, display,
whatnot). Only when it's clear to us can we start expressing this in Python.

> > Reading the inet info based on the bridge has been vetoed, as we are
> > doing away with bridges.
> >
> > It would also work if virbr0 were updated in the vds_interfaces table.
> > Since this is not happening either, we have an issue.

It might be a valid hack to drop this default virbr0 on vdsm start - not
only the libvirt definition thereof, but also the running kernel device.

However, as expressed above, this would not solve your problem when you
have a currently-running host with multiple addresses.

Dan.

From sabose at redhat.com Tue Apr 9 09:08:16 2013
From: Sahina Bose
To: devel at ovirt.org
Subject: Re: [Engine-devel] ovirt-host-deploy and multiple bridges
Date: Tue, 09 Apr 2013 18:37:56 +0530
Message-ID: <516412AC.7060907@redhat.com>
In-Reply-To: 20130409123535.GK26069@redhat.com

Decoding "correct address": glusterHostsList should return any ipAddress
that the engine knows as being associated with the host.

It could be either the ipAddress used while adding the host (stored as
hostname in vds_static) or any of the ipAddresses populated in the
vds_interface table (addr column).

I do not have enough knowledge about this bit of code to say what entries
are made in the vds_interface table. I know there's an entry for ovirtmgmt
here, but I'm not sure whether it gets added as part of the addHost flow.

thx
sahina

On 04/09/2013 06:05 PM, Dan Kenigsberg wrote:
> Can you define (in plain English) what is the "correct" address?
> The host may have multiple valid addresses (storage, migration, display,
> whatnot).
> [...]
> Dan.
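Sahina's definition of the "correct" address above can be sketched as a simple membership check (hypothetical code with illustrative data; not the engine's actual schema access):

```python
# Hypothetical engine-side check: an address returned by glusterHostsList
# is usable only if the engine already knows it, either as the hostname
# stored in vds_static or as an addr row in vds_interface.
VDS_STATIC_HOSTNAME = "10.70.37.219"    # address used while adding the host
VDS_INTERFACE_ADDRS = {"10.70.37.219"}  # ovirtmgmt only; virbr0 is absent

def is_known_address(addr):
    """True iff the engine can associate this address with the host."""
    return addr == VDS_STATIC_HOSTNAME or addr in VDS_INTERFACE_ADDRS

print(is_known_address("10.70.37.219"))   # -> True
print(is_known_address("192.168.122.1"))  # -> False: sync job cannot match
```

Under this rule, virbr0's 192.168.122.1 fails the check precisely because it is in neither table, which is the reported problem.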
From barumuga at redhat.com Tue Apr 9 09:58:22 2013
From: Balamurugan Arumugam
To: devel at ovirt.org
Subject: Re: [Engine-devel] [vdsm] ovirt-host-deploy and multiple bridges
Date: Tue, 09 Apr 2013 19:28:17 +0530
Message-ID: <51641E79.5030506@redhat.com>
In-Reply-To: 516412AC.7060907@redhat.com

On 04/09/2013 06:37 PM, Sahina Bose wrote:
> Decoding "correct address": glusterHostsList should return any ipAddress
> that the engine knows as being associated with the host.
> It could be either the ipAddress used while adding the host (stored as
> hostname in vds_static) or any of the ipAddresses populated in the
> vds_interface table (addr column).
> [...]

I guess the vds_interface table is populated by the IPs given by vdsm
through getVdsCaps.

The current glusterHostsList provides one of the IP addresses of the local
host (other than 127.*.*.*). If virbr0 is enabled, it picks up the bridge's
IP address 192.168.122.1 and sends it to the engine, but this entry is
missing in the table.

The requirement is that we need an IP of the local host that is also
stored in the database.

The database has entries for a host's IPs that come from physical NICs
and/or bridges that have NICs as slaves.

Regards,
Bala
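Bala's requirement, a local address that the database also knows, can be sketched as an intersection pick (hypothetical code and data, not vdsm's implementation):

```python
# Hypothetical sketch: report a local non-loopback address only if the
# engine database (vds_interface) also stores it. virbr0's address is on
# the host but not in the database, so it is skipped.
LOCAL_ADDRS = ["192.168.122.1", "10.70.37.219"]  # non-loopback addrs on host
DB_ADDRS = {"10.70.37.219"}                      # addr column rows

def pick_reportable_addr(local_addrs, db_addrs):
    """First local address the database knows, or None."""
    for addr in local_addrs:
        if addr in db_addrs:
            return addr
    return None

print(pick_reportable_addr(LOCAL_ADDRS, DB_ADDRS))  # -> 10.70.37.219
```

This is only a restatement of the requirement; it would still need vdsm to learn which addresses the engine has stored, which is part of what the thread is debating.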
> _______________________________________________
> vdsm-devel mailing list
> vdsm-devel@lists.fedorahosted.org
> https://lists.fedorahosted.org/mailman/listinfo/vdsm-devel

From danken at redhat.com Tue Apr 9 14:55:59 2013
From: Dan Kenigsberg
To: devel at ovirt.org
Subject: Re: [Engine-devel] [vdsm] ovirt-host-deploy and multiple bridges
Date: Tue, 09 Apr 2013 21:55:55 +0300
Message-ID: <20130409185555.GD11630@redhat.com>
In-Reply-To: 51641E79.5030506@redhat.com

On Tue, Apr 09, 2013 at 07:28:17PM +0530, Balamurugan Arumugam wrote:
> [...]
> The requirement is that we need an IP of the local host that is also
> stored in the database.
>
> The database has entries for a host's IPs that come from physical NICs
> and/or bridges that have NICs as slaves.
It's not something I've tested, or want to encourage, but currently,
outside of gluster, vdsm may run behind a fancy NAT as a virtual server.
I.e., its local address may be utterly different from the address used by
the engine.

I'd like to keep having this flexibility, and not to assume otherwise.

Why does glusterHostsList need to return the IP of the management network?
The client that issued this verb had to know that IP in the first place.

I notice that the idiom "_getLocalIpAddress() or _getGlusterHostName()" is
used all too often in vdsm/gluster/cli.py.

How about changing the vdsm/engine API so that the string "localhost" is
returned instead? Then the engine can replace it with whatever it sees fit.

Dan.

From sabose at redhat.com Wed Apr 10 01:48:59 2013
From: Sahina Bose
To: devel at ovirt.org
Subject: Re: [Engine-devel] [vdsm] ovirt-host-deploy and multiple bridges
Date: Wed, 10 Apr 2013 11:18:54 +0530
Message-ID: <5164FD46.7010107@redhat.com>
In-Reply-To: 20130409185555.GD11630@redhat.com

On 04/10/2013 12:25 AM, Dan Kenigsberg wrote:
> [...]
> How about changing the vdsm/engine API so that the string "localhost" is
> returned instead? Then the engine can replace it with whatever it sees
> fit.
Dan,

Thanks for clarifying. It looks like relying on the ipAddress to identify
the host will be error-prone going forward.

We will change the approach and start using the UUID that gluster peer
status returns to identify the host - we will create a new verb
glusterPeerList that does this.

And for the current host, as you mentioned, since the engine already knows
which vdsm host this command is executed on, the engine will not rely on
vdsm to return the host's IP.
thanks sahina --===============3657039708702390715==-- From barumuga at redhat.com Wed Apr 10 04:49:17 2013 Content-Type: multipart/mixed; boundary="===============3311224497836087436==" MIME-Version: 1.0 From: Balamurugan Arumugam To: devel at ovirt.org Subject: Re: [Engine-devel] [vdsm] ovirt-host-deploy and multible bridges Date: Wed, 10 Apr 2013 14:19:11 +0530 Message-ID: <51652787.8020205@redhat.com> In-Reply-To: 5164FD46.7010107@redhat.com --===============3311224497836087436== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On 04/10/2013 11:18 AM, Sahina Bose wrote: > > On 04/10/2013 12:25 AM, Dan Kenigsberg wrote: >> On Tue, Apr 09, 2013 at 07:28:17PM +0530, Balamurugan Arumugam wrote: >>> On 04/09/2013 06:37 PM, Sahina Bose wrote: >>>> Decoding "correct address" - glusterHostsList should return any >>>> ipAddress that engine knows as being associated with host. >>>> It could be either ipAddress used while adding host (stored as hostname >>>> in vds_static) or any of the ipAddresses populated in vds_interface >>>> table (addr column) . >>>> I do not have enough knowledge about this bit of code to say what >>>> entries are made in vds_interface table. I know there's an entry for >>>> ovirtmgmt here but not sure if this gets added as part of addHost flow >>>> or not. >>>> >>> I guess, vds_interface table is populated by ips given by vdsm >>> through getVdsCaps. >>> >>> Current glusterHostsList provides one of ipaddress of the local host >>> (other than 127.*.*.*). If virbr0 is enabled, it picks up >>> 192.168.122.1 ip address of the bridge and sends to the engine, but >>> this entry is missing in the table. >>> >>> The requirement is that we need a ip of the local host which is also >>> stored in the database. >>> >>> The database has entries of ips of a host those are from physical >>> nics and/or bridges who has slaves to nics. 
>> It's not something I've tested, or want to encourage, but currently,
>> outside of gluster, Vdsm may run behind a fancy NAT as a virtual server.
>> I.e., its local address may be utterly different from the address used
>> by Engine.
>>
>> I'd like to keep having this flexibility, and not to assume otherwise.
>>
>> Why does glusterHostsList need to return the ip of the management
>> network? The client that issued this verb has to know that IP in the
>> first place.
>>
>> I notice that the idiom "_getLocalIpAddress() or _getGlusterHostName()"
>> is used all too often in vdsm/gluster/cli.py.
>>
>> How about changing the Vdsm/Engine API so that the string "localhost" is
>> returned instead? Then, Engine can replace it with whatever it sees fit.
>>
>> Dan.
> Dan,
>
> Thanks for clarifying. Looks like relying on the IpAddress to determine
> the host will be prone to errors going forward.
> We will change the approach and start using the UUID that gluster peer
> status returns to identify host - will create a new verb glusterPeerList
> that does this.
>
Current glusterHostsList provides a list of
{'hostname': HOSTNAME, 'uuid': UUID, 'status': STATE} including the
local host.

What will be the difference between the new glusterPeerList and the
existing glusterHostsList?

> And for the current host, like you mentioned, since the engine already
> knows which vdsm host this command is executed on, the engine will not
> rely on vdsm to return the host's IP.
> Regards, Bala --===============3311224497836087436==-- From sabose at redhat.com Wed Apr 10 04:53:18 2013 Content-Type: multipart/mixed; boundary="===============8274036877141616767==" MIME-Version: 1.0 From: Sahina Bose To: devel at ovirt.org Subject: Re: [Engine-devel] [vdsm] ovirt-host-deploy and multible bridges Date: Wed, 10 Apr 2013 14:23:14 +0530 Message-ID: <5165287A.1010003@redhat.com> In-Reply-To: 51652787.8020205@redhat.com --===============8274036877141616767== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On 04/10/2013 02:19 PM, Balamurugan Arumugam wrote: > On 04/10/2013 11:18 AM, Sahina Bose wrote: >> >> On 04/10/2013 12:25 AM, Dan Kenigsberg wrote: >>> On Tue, Apr 09, 2013 at 07:28:17PM +0530, Balamurugan Arumugam wrote: >>>> On 04/09/2013 06:37 PM, Sahina Bose wrote: >>>>> Decoding "correct address" - glusterHostsList should return any >>>>> ipAddress that engine knows as being associated with host. >>>>> It could be either ipAddress used while adding host (stored as = >>>>> hostname >>>>> in vds_static) or any of the ipAddresses populated in vds_interface >>>>> table (addr column) . >>>>> I do not have enough knowledge about this bit of code to say what >>>>> entries are made in vds_interface table. I know there's an entry for >>>>> ovirtmgmt here but not sure if this gets added as part of addHost = >>>>> flow >>>>> or not. >>>>> >>>> I guess, vds_interface table is populated by ips given by vdsm >>>> through getVdsCaps. >>>> >>>> Current glusterHostsList provides one of ipaddress of the local host >>>> (other than 127.*.*.*). If virbr0 is enabled, it picks up >>>> 192.168.122.1 ip address of the bridge and sends to the engine, but >>>> this entry is missing in the table. >>>> >>>> The requirement is that we need a ip of the local host which is also >>>> stored in the database. 
>>>>
>>>> The database has entries of ips of a host those are from physical
>>>> nics and/or bridges who has slaves to nics.
>>> It's not something I've tested, or want to encourage, but currently,
>>> outside of gluster, Vdsm may run behind a fancy NAT as a virtual
>>> server.
>>> I.e., its local address may be utterly different from the address used
>>> by Engine.
>>>
>>> I'd like to keep having this flexibility, and not to assume otherwise.
>>>
>>> Why does glusterHostsList need to return the ip of the management
>>> network? The client that issued this verb has to know that IP in the
>>> first place.
>>>
>>> I notice that the idiom "_getLocalIpAddress() or _getGlusterHostName()"
>>> is used all too often in vdsm/gluster/cli.py.
>>>
>>> How about changing the Vdsm/Engine API so that the string
>>> "localhost" is
>>> returned instead? Then, Engine can replace it with whatever it sees
>>> fit.
>>>
>>> Dan.
>> Dan,
>>
>> Thanks for clarifying. Looks like relying on the IpAddress to determine
>> the host will be prone to errors going forward.
>> We will change the approach and start using the UUID that gluster peer
>> status returns to identify host - will create a new verb glusterPeerList
>> that does this.
>>
>
> Current glusterHostsList provides list of
> {'hostname': HOSTNAME, 'uuid': UUID, 'status': STATE} including local
> host.
>
> What will be the difference between new glusterPeerList and existing
> glusterHostsList?
>
If this is the case, we just need to make sure that at the engine we use
the UUID and not the IP address to identify the host. We would still
need a vdsm verb that returns the current host's gluster UUID, to store
in the engine as part of the Add Host flow.

>
>
>> And for the current host, like you mentioned, since the engine already
>> knows which vdsm host this command is executed on, the engine will not
>> rely on vdsm to return the host's IP.
>> > > > Regards, > Bala > > --===============8274036877141616767==-- From barumuga at redhat.com Wed Apr 10 04:58:00 2013 Content-Type: multipart/mixed; boundary="===============6020844253352640380==" MIME-Version: 1.0 From: Balamurugan Arumugam To: devel at ovirt.org Subject: Re: [Engine-devel] [vdsm] ovirt-host-deploy and multible bridges Date: Wed, 10 Apr 2013 14:27:55 +0530 Message-ID: <51652993.1000906@redhat.com> In-Reply-To: 5165287A.1010003@redhat.com --===============6020844253352640380== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On 04/10/2013 02:23 PM, Sahina Bose wrote: > > On 04/10/2013 02:19 PM, Balamurugan Arumugam wrote: >> On 04/10/2013 11:18 AM, Sahina Bose wrote: >>> >>> On 04/10/2013 12:25 AM, Dan Kenigsberg wrote: >>>> On Tue, Apr 09, 2013 at 07:28:17PM +0530, Balamurugan Arumugam wrote: >>>>> On 04/09/2013 06:37 PM, Sahina Bose wrote: >>>>>> Decoding "correct address" - glusterHostsList should return any >>>>>> ipAddress that engine knows as being associated with host. >>>>>> It could be either ipAddress used while adding host (stored as >>>>>> hostname >>>>>> in vds_static) or any of the ipAddresses populated in vds_interface >>>>>> table (addr column) . >>>>>> I do not have enough knowledge about this bit of code to say what >>>>>> entries are made in vds_interface table. I know there's an entry for >>>>>> ovirtmgmt here but not sure if this gets added as part of addHost >>>>>> flow >>>>>> or not. >>>>>> >>>>> I guess, vds_interface table is populated by ips given by vdsm >>>>> through getVdsCaps. >>>>> >>>>> Current glusterHostsList provides one of ipaddress of the local host >>>>> (other than 127.*.*.*). If virbr0 is enabled, it picks up >>>>> 192.168.122.1 ip address of the bridge and sends to the engine, but >>>>> this entry is missing in the table. >>>>> >>>>> The requirement is that we need a ip of the local host which is also >>>>> stored in the database. 
>>>>>
>>>>> The database has entries of ips of a host those are from physical
>>>>> nics and/or bridges who has slaves to nics.
>>>> It's not something I've tested, or want to encourage, but currently,
>>>> outside of gluster, Vdsm may run behind a fancy NAT as a virtual
>>>> server.
>>>> I.e., its local address may be utterly different from the address used
>>>> by Engine.
>>>>
>>>> I'd like to keep having this flexibility, and not to assume otherwise.
>>>>
>>>> Why does glusterHostsList need to return the ip of the management
>>>> network? The client that issued this verb has to know that IP in the
>>>> first place.
>>>>
>>>> I notice that the idiom "_getLocalIpAddress() or _getGlusterHostName()"
>>>> is used all too often in vdsm/gluster/cli.py.
>>>>
>>>> How about changing the Vdsm/Engine API so that the string
>>>> "localhost" is
>>>> returned instead? Then, Engine can replace it with whatever it sees
>>>> fit.
>>>>
>>>> Dan.
>>> Dan,
>>>
>>> Thanks for clarifying. Looks like relying on the IpAddress to determine
>>> the host will be prone to errors going forward.
>>> We will change the approach and start using the UUID that gluster peer
>>> status returns to identify host - will create a new verb glusterPeerList
>>> that does this.
>>>
>>
>> Current glusterHostsList provides list of
>> {'hostname': HOSTNAME, 'uuid': UUID, 'status': STATE} including local
>> host.
>>
>> What will be the difference between new glusterPeerList and existing
>> glusterHostsList?
>>
> If this is the case, we just need to make sure at engine we use UUID and
> not IP address to identify host. We would still need a vdsm verb that
> will return the current host gluster UUID, to store in engine in case of
> Add Host flow.

I think getVdsCaps can include this. Dan, is it a good idea?
>>> >> >> Regards, Bala --===============6020844253352640380==-- From danken at redhat.com Wed Apr 10 07:41:46 2013 Content-Type: multipart/mixed; boundary="===============1586578382417501661==" MIME-Version: 1.0 From: Dan Kenigsberg To: devel at ovirt.org Subject: Re: [Engine-devel] [vdsm] ovirt-host-deploy and multible bridges Date: Wed, 10 Apr 2013 14:41:42 +0300 Message-ID: <20130410114142.GE32275@redhat.com> In-Reply-To: 51652993.1000906@redhat.com --===============1586578382417501661== Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable On Wed, Apr 10, 2013 at 02:27:55PM +0530, Balamurugan Arumugam wrote: > On 04/10/2013 02:23 PM, Sahina Bose wrote: > > > >On 04/10/2013 02:19 PM, Balamurugan Arumugam wrote: > >>On 04/10/2013 11:18 AM, Sahina Bose wrote: > >>> > >>>On 04/10/2013 12:25 AM, Dan Kenigsberg wrote: > >>>>On Tue, Apr 09, 2013 at 07:28:17PM +0530, Balamurugan Arumugam wrote: > >>>>>On 04/09/2013 06:37 PM, Sahina Bose wrote: > >>>>>>Decoding "correct address" - glusterHostsList should return any > >>>>>>ipAddress that engine knows as being associated with host. > >>>>>>It could be either ipAddress used while adding host (stored as > >>>>>>hostname > >>>>>>in vds_static) or any of the ipAddresses populated in vds_interface > >>>>>>table (addr column) . > >>>>>>I do not have enough knowledge about this bit of code to say what > >>>>>>entries are made in vds_interface table. I know there's an entry for > >>>>>>ovirtmgmt here but not sure if this gets added as part of addHost > >>>>>>flow > >>>>>>or not. > >>>>>> > >>>>>I guess, vds_interface table is populated by ips given by vdsm > >>>>>through getVdsCaps. > >>>>> > >>>>>Current glusterHostsList provides one of ipaddress of the local host > >>>>>(other than 127.*.*.*). If virbr0 is enabled, it picks up > >>>>>192.168.122.1 ip address of the bridge and sends to the engine, but > >>>>>this entry is missing in the table. 
> >>>>>
> >>>>>The requirement is that we need a ip of the local host which is also
> >>>>>stored in the database.
> >>>>>
> >>>>>The database has entries of ips of a host those are from physical
> >>>>>nics and/or bridges who has slaves to nics.
> >>>>It's not something I've tested, or want to encourage, but currently,
> >>>>outside of gluster, Vdsm may run behind a fancy NAT as a virtual
> >>>>server.
> >>>>I.e., its local address may be utterly different from the address used
> >>>>by Engine.
> >>>>
> >>>>I'd like to keep having this flexibility, and not to assume otherwise.
> >>>>
> >>>>Why does glusterHostsList need to return the ip of the management
> >>>>network? The client that issued this verb has to know that IP in the
> >>>>first place.
> >>>>
> >>>>I notice that the idiom "_getLocalIpAddress() or _getGlusterHostName()"
> >>>>is used all too often in vdsm/gluster/cli.py.
> >>>>
> >>>>How about changing the Vdsm/Engine API so that the string
> >>>>"localhost" is
> >>>>returned instead? Then, Engine can replace it with whatever it sees
> >>>>fit.
> >>>>
> >>>>Dan.
> >>>Dan,
> >>>
> >>>Thanks for clarifying. Looks like relying on the IpAddress to determine
> >>>the host will be prone to errors going forward.
> >>>We will change the approach and start using the UUID that gluster peer
> >>>status returns to identify host - will create a new verb glusterPeerList
> >>>that does this.
> >>>
> >>
> >>Current glusterHostsList provides list of
> >>{'hostname': HOSTNAME, 'uuid': UUID, 'status': STATE} including local
> >>host.
> >>
> >>What will be the difference between new glusterPeerList and existing
> >>glusterHostsList?
> >>
> >If this is the case, we just need to make sure at engine we use UUID and
> >not IP address to identify host. We would still need a vdsm verb that
> >will return the current host gluster UUID, to store in engine in case of
> >Add Host flow.
>
>
> I think, getVdsCaps can include this. Dan, is it good idea?
I do not mind adding glusterUUID to this "bag of things".
(preferably implementing it within the vdsm-gluster subpackage)

I hope Saggi or Adam do not mind making getVdsCaps a little bit more
dirty - they may suggest that you add a getGlusterUUID verb.

Dan.

--===============1586578382417501661==--

From danken at redhat.com Thu Apr 11 09:09:10 2013
Content-Type: multipart/mixed; boundary="===============8952842929157280907=="
MIME-Version: 1.0
From: Dan Kenigsberg
To: devel at ovirt.org
Subject: Re: [Engine-devel] [vdsm] ovirt-host-deploy and multible bridges
Date: Thu, 11 Apr 2013 16:09:02 +0300
Message-ID: <20130411130902.GK16563@redhat.com>
In-Reply-To: 5165287A.1010003@redhat.com

--===============8952842929157280907==
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

On Wed, Apr 10, 2013 at 02:23:14PM +0530, Sahina Bose wrote:
>
> On 04/10/2013 02:19 PM, Balamurugan Arumugam wrote:
> >On 04/10/2013 11:18 AM, Sahina Bose wrote:
> >>
> >>On 04/10/2013 12:25 AM, Dan Kenigsberg wrote:
> >>>On Tue, Apr 09, 2013 at 07:28:17PM +0530, Balamurugan Arumugam wrote:
> >>>>On 04/09/2013 06:37 PM, Sahina Bose wrote:
> >>>>>Decoding "correct address" - glusterHostsList should return any
> >>>>>ipAddress that engine knows as being associated with host.
> >>>>>It could be either ipAddress used while adding host
> >>>>>(stored as hostname
> >>>>>in vds_static) or any of the ipAddresses populated in vds_interface
> >>>>>table (addr column).
> >>>>>I do not have enough knowledge about this bit of code to say what
> >>>>>entries are made in vds_interface table. I know there's an entry for
> >>>>>ovirtmgmt here but not sure if this gets added as part of
> >>>>>addHost flow
> >>>>>or not.
> >>>>>
> >>>>I guess, vds_interface table is populated by ips given by vdsm
> >>>>through getVdsCaps.
> >>>>
> >>>>Current glusterHostsList provides one of ipaddress of the local host
> >>>>(other than 127.*.*.*).
> >>>>If virbr0 is enabled, it picks up
> >>>>192.168.122.1 ip address of the bridge and sends to the engine, but
> >>>>this entry is missing in the table.
> >>>>
> >>>>The requirement is that we need a ip of the local host which is also
> >>>>stored in the database.
> >>>>
> >>>>The database has entries of ips of a host those are from physical
> >>>>nics and/or bridges who has slaves to nics.
> >>>It's not something I've tested, or want to encourage, but currently,
> >>>outside of gluster, Vdsm may run behind a fancy NAT as a
> >>>virtual server.
> >>>I.e., its local address may be utterly different from the address used
> >>>by Engine.
> >>>
> >>>I'd like to keep having this flexibility, and not to assume otherwise.
> >>>
> >>>Why does glusterHostsList need to return the ip of the management
> >>>network? The client that issued this verb has to know that IP in the
> >>>first place.
> >>>
> >>>I notice that the idiom "_getLocalIpAddress() or _getGlusterHostName()"
> >>>is used all too often in vdsm/gluster/cli.py.

And what about these ^^^? Is there any reason to keep these guesses
elsewhere in the code?

--===============8952842929157280907==--
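The guesswork Dan questions above could be retired along the lines he proposed earlier in the thread: vdsm returns the literal string "localhost" for itself, and the engine substitutes the address it already knows. A minimal sketch of that split (all function and constant names here are hypothetical, not the actual vdsm or engine code):

```python
# Hedged sketch of the "return 'localhost'" proposal: vdsm stops
# guessing its own address; the engine, which already knows which host
# it called, substitutes the real name. All names are hypothetical.

LOCAL_MARKER = 'localhost'

def gluster_hosts_list_entry():
    # Replaces the `_getLocalIpAddress() or _getGlusterHostName()`
    # idiom: the local host is reported with a fixed marker.
    return {'hostname': LOCAL_MARKER, 'status': 'CONNECTED'}

def engine_resolve(entry, known_host_address):
    """Engine-side substitution: swap the marker for the address the
    engine used to reach this vdsm host; leave real peers untouched."""
    if entry['hostname'] == LOCAL_MARKER:
        entry = dict(entry, hostname=known_host_address)
    return entry

resolved = engine_resolve(gluster_hosts_list_entry(), '10.70.37.219')
print(resolved['hostname'])  # → 10.70.37.219
```

The point of the design is that no component ever has to guess: vdsm never claims an address (so virbr0 or a NAT cannot mislead it), and the engine only uses an address it already trusts.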