Network configuration seems OK.
Please provide engine.log and vdsm.log.
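
On a default installation those logs usually live in the locations below
(adjust the paths if your setup differs), so grabbing them should be as
simple as:

  # on the engine machine (default oVirt location assumed)
  tar czf engine-log.tar.gz /var/log/ovirt-engine/engine.log
  # on the host (default VDSM location assumed)
  tar czf vdsm-log.tar.gz /var/log/vdsm/vdsm.log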

Thanks

On Wed, Aug 24, 2016 at 3:22 PM, Uwe Laverenz <uwe@laverenz.de> wrote:

> Hi,
>
> sorry for the delay, I reinstalled everything, configured the networks,
> attached the iSCSI storage with two interfaces and finally created the
> iSCSI bond:
>
>   [root@ovh01 ~]# route
>   Kernel IP routing table
>   Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>   default         hp5406-1-srv.mo 0.0.0.0         UG    0      0        0 ovirtmgmt
>   10.0.24.0       0.0.0.0         255.255.255.0   U     0      0        0 ovirtmgmt
>   10.0.131.0      0.0.0.0         255.255.255.0   U     0      0        0 enp9s0f0
>   10.0.132.0      0.0.0.0         255.255.255.0   U     0      0        0 enp9s0f1
>   link-local      0.0.0.0         255.255.0.0     U     1005   0        0 enp9s0f0
>   link-local      0.0.0.0         255.255.0.0     U     1006   0        0 enp9s0f1
>   link-local      0.0.0.0         255.255.0.0     U     1008   0        0 ovirtmgmt
>   link-local      0.0.0.0         255.255.0.0     U     1015   0        0 bond0
>   link-local      0.0.0.0         255.255.0.0     U     1017   0        0 ADMIN
>   link-local      0.0.0.0         255.255.0.0     U     1021   0        0 SRV
>
> and:
>
>   [root@ovh01 ~]# ip a
>   1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>       link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>       inet 127.0.0.1/8 scope host lo
>          valid_lft forever preferred_lft forever
>       inet6 ::1/128 scope host
>          valid_lft forever preferred_lft forever
>   2: enp13s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UP qlen 1000
>       link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
>   3: enp8s0f0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
>       link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>   4: enp8s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
>       link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>   5: enp9s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>       link/ether 90:e2:ba:11:21:d4 brd ff:ff:ff:ff:ff:ff
>       inet 10.0.131.181/24 brd 10.0.131.255 scope global enp9s0f0
>          valid_lft forever preferred_lft forever
>   6: enp9s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
>       link/ether 90:e2:ba:11:21:d5 brd ff:ff:ff:ff:ff:ff
>       inet 10.0.132.181/24 brd 10.0.132.255 scope global enp9s0f1
>          valid_lft forever preferred_lft forever
>   7: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>       link/ether 26:b2:4e:5e:f0:60 brd ff:ff:ff:ff:ff:ff
>   8: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>       link/ether e0:3f:49:6d:68:c4 brd ff:ff:ff:ff:ff:ff
>       inet 10.0.24.181/24 brd 10.0.24.255 scope global ovirtmgmt
>          valid_lft forever preferred_lft forever
>   14: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovirtmgmt state UNKNOWN qlen 500
>       link/ether fe:16:3e:79:25:86 brd ff:ff:ff:ff:ff:ff
>       inet6 fe80::fc16:3eff:fe79:2586/64 scope link
>          valid_lft forever preferred_lft forever
>   15: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>       link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>   16: bond0.32@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master ADMIN state UP
>       link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>   17: ADMIN: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>       link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>   20: bond0.24@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master SRV state UP
>       link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>   21: SRV: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
>       link/ether 90:e2:ba:11:21:d0 brd ff:ff:ff:ff:ff:ff
>
> The host keeps toggling all storage domains on and off as soon as an
> iSCSI bond is configured.
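>
> For reference, a quick sanity check that each portal is still reachable
> through its own interface while the bond is active could look like this
> (the portal addresses below are placeholders, not the real ones):
>
>   ping -I enp9s0f0 -c 3 10.0.131.100    # placeholder portal address on the 10.0.131.0/24 net
>   ping -I enp9s0f1 -c 3 10.0.132.100    # placeholder portal address on the 10.0.132.0/24 net
>   iscsiadm -m session -P 1              # list active sessions and the iface each one uses
>   multipath -ll                         # show the multipath maps and which paths are active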
>
> Thank you for your patience.
>
> cu,
> Uwe
>
> On 18.08.2016 at 11:10, Elad Ben Aharon wrote:
>
>> I don't think it's necessary.
>> Please provide the host's routing table and interfaces list ('ip a' or
>> ifconfig) while it's configured with the bond.
>>
>> Thanks
>>
>> On Tue, Aug 16, 2016 at 4:39 PM, Uwe Laverenz <uwe@laverenz.de> wrote:
>>
>>> Hi Elad,
>>>
>>> On 16.08.2016 at 10:52, Elad Ben Aharon wrote:
>>>
>>>> Please be sure that ovirtmgmt is not part of the iSCSI bond.
>>>
>>> Yes, I made sure it is not part of the bond.
>>>
>>>> There does seem to be a conflict between default and
>>>> enp9s0f0/enp9s0f1. Try to put the host in maintenance and then delete
>>>> the iscsi nodes using 'iscsiadm -m node -o delete'. Then activate the
>>>> host.
>>>
>>> I tried that and managed to get the iSCSI interface clean, no
>>> "default" anymore. But that didn't solve the problem of the host
>>> becoming "inactive". Not even the NFS domains would come up.
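>>>
>>> Roughly, the cleanup boiled down to something like the following
>>> (target and portal values are placeholders):
>>>
>>>   iscsiadm -m node                                         # list the recorded node entries
>>>   iscsiadm -m node -o delete                               # drop all node records
>>>   iscsiadm -m node -T <target-iqn> -p <portal> -o delete   # or drop just one record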
>>>
>>> As soon as I remove the iSCSI bond, the host becomes responsive again
>>> and I can activate all storage domains. Removing the bond also brings
>>> the duplicated "Iface Name" back (but this time it causes no problems).
>>>
>>> ...
>>>
>>> I wonder if there is a basic misunderstanding on my side: wouldn't all
>>> targets have to be reachable from all of the interfaces configured
>>> into the bond for it to work?
>>>
>>> But that would mean either two interfaces in the same network or
>>> routing between the iSCSI networks.
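>>>
>>> Spelled out, "all targets reachable from all interfaces" would mean a
>>> full mesh of sessions, something along these lines (IQN, portal
>>> addresses and iface names are placeholders):
>>>
>>>   iscsiadm -m node -T <target-iqn> -p <portal-131-net>:3260 -I <iface-enp9s0f0> --login
>>>   iscsiadm -m node -T <target-iqn> -p <portal-131-net>:3260 -I <iface-enp9s0f1> --login
>>>   iscsiadm -m node -T <target-iqn> -p <portal-132-net>:3260 -I <iface-enp9s0f0> --login
>>>   iscsiadm -m node -T <target-iqn> -p <portal-132-net>:3260 -I <iface-enp9s0f1> --login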
>>>
>>> Thanks,
>>>
>>> Uwe
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users