which NIC/network NFS storage is using

On my hosts, I have configured a 1GbE NIC for ovirtmgmt, whose usage is currently set up for Management, Display, VM and Migration. I also have two 10GbE NICs bonded with LACP, which are VLAN-tagged and assigned the dozen or so VLANs the various VMs need to access. I have NFS storage mounted to the Data Center, and I would like to know how I check/specify which network connection oVirt is using for that NFS storage. I want to make sure it is utilizing the 10GbE bond on each host rather than the 1GbE connection.

Thanks,
Ryan

On Tue, Jun 14, 2016 at 5:26 PM, Ryan Mahoney <ryan@beaconhillentertainment.com> wrote:
We don't configure anything regarding the network used for NFS storage, so it works just like any other NFS mount you create yourself.

Nir

Right, but how do you specify which network the NFS traffic is using?

On Tue, Jun 14, 2016 at 12:41 PM, Nir Soffer <nsoffer@redhat.com> wrote:

Ryan,

You can check it with the following shell command:
# ip route get x.x.x.x
where x.x.x.x is an IP address of your NFS storage.

On 14/06/16 19:52, Ryan Mahoney wrote:
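A quick way to see which interface the kernel will actually use to reach the NFS server is `ip route get`; a sketch of the check (all addresses and interface names below are made-up examples):

```shell
# Ask the kernel which route it would pick for the NFS server's IP
# (10.10.10.20 is an example address):
ip route get 10.10.10.20

# Typical output looks like:
#   10.10.10.20 dev bond0.100 src 10.10.10.5 cache
# If "dev" names the 1GbE management NIC instead of the 10GbE bond
# (or a VLAN sub-interface on it), NFS traffic is taking the slow path.
```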

I guess what the colleague wants to know is how to specify an interface in a different VLAN on top of the 10Gb LACP bond for the NFS traffic to flow over. In the VMware world that would be a vmkernel interface, i.e. a new network/interface with a different IP address than Management (ovirtmgmt).

Fernando

On 14/06/2016 13:52, Ryan Mahoney wrote:
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Yes, Fernando, this is exactly what I'm asking for.

On Tue, Jun 14, 2016 at 1:25 PM, Fernando Frediani <fernando.frediani@upx.com.br> wrote:

Set a static route so that traffic to the storage goes through the NIC(s) you want:

# ip route add x.x.x.x/32 dev <nic> via <gw>

where x.x.x.x/32 is the IP of the NFS server.

On Tue, Jun 14, 2016 at 1:30 PM, Ryan Mahoney <ryan@beaconhillentertainment.com> wrote:
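To sketch what that looks like with concrete names, and how such a route could be made persistent on RHEL/CentOS-style hosts of that era (all names, paths and addresses here are examples, not from the thread):

```shell
# One-off route (lost on reboot): reach the NFS server through the
# storage VLAN interface on the bond (names/addresses are examples):
ip route add 10.10.10.20/32 dev bond0.100 via 10.10.10.1

# To survive reboots on hosts using initscripts-style networking, the
# same route can go into an interface route file, e.g.:
#   /etc/sysconfig/network-scripts/route-bond0.100
# containing:
#   10.10.10.20/32 via 10.10.10.1 dev bond0.100
```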
--
Charles Kozler
Vice President, IT Operations
FIX Flyer, LLC
225 Broadway | Suite 1600 | New York, NY 10007
1-888-349-3593 | http://www.fixflyer.com

And should I be able to give ip route bond0 as the NIC for the 10GbE bond, or do I have to give it both individual NICs in the bond separately?

On Tue, Jun 14, 2016 at 1:35 PM, Charles Kozler <charles@fixflyer.com> wrote:
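For reference (this point is not answered explicitly in the thread): routes attach to the logical interface, so they go on the bond itself or on its VLAN sub-interface, never on the enslaved physical NICs. A sketch with made-up names:

```shell
# Route via the bond's VLAN sub-interface (example names/addresses):
ip route add 10.10.10.20/32 dev bond0.100

# Adding routes on the physical slaves (e.g. eth2/eth3) is not
# meaningful while they are enslaved to bond0; enslaved NICs carry
# no IP configuration of their own.
```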

I tried a few different additions of ip route on the host, but I am unable to connect/ping to the NFS server once I add them. I first tried adding the route on the bond0 device, and then on one of the logical interfaces within that bond, but neither worked. I also get an error of "RTNETLINK answers: Network is unreachable" when I include a gateway in the ip route add command. If I leave the gateway off, it will let me add the route, but it does not work as a valid route. I do not have an IP address on the logical network that I am attempting to use on the bonded connection; the manager does not like it when I try to assign one.

On Tue, Jun 14, 2016 at 2:04 PM, Ryan Mahoney <ryan@beaconhillentertainment.com> wrote:
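For what it's worth, "Network is unreachable" from ip route add usually means the gateway given with "via" is not inside any subnet the host already has an address in; the interface carrying the storage VLAN needs an address of its own first. A sketch with made-up addresses and names:

```shell
# Give the storage VLAN interface an address in the storage subnet
# (all addresses/names are examples):
ip addr add 10.10.10.5/24 dev bond0.100

# The connected route for 10.10.10.0/24 now exists, so either works:
ip route add 10.10.10.20/32 dev bond0.100     # link-scope, no gateway
ip route add 10.10.10.20/32 via 10.10.10.1    # gateway now reachable
```

In oVirt, the usual way to get that address onto the interface is to assign an IP to the logical network in the engine's host network setup, rather than adding it by hand with ip addr.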
participants (5)
- Charles Kozler
- Fernando Frediani
- Nir Soffer
- Pavel Gashev
- Ryan Mahoney