[ovirt-users] Network interface not working
Герасимов Александр
gerasimov.ay at eksmo.ru
Thu Jun 8 10:59:46 UTC 2017
Hi Lev.
I created a new virtual host and ran ping to it from the internet.
The ping shows about 70% packet loss.
But if I run ping from the virtual host to some host on the internet, and at
the same time ping this virtual host from the internet,
then the ping shows only 0% to 10% packet loss.
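
For example, the difference looks roughly like this (the VM address below is
only a placeholder, not the real one):

# from a machine on the internet, while the VM is idle:
ping -c 100 203.0.113.10        # ~70% packet loss

# on the VM, start any outbound traffic:
ping 8.8.8.8

# from the outside machine again, at the same time:
ping -c 100 203.0.113.10        # 0-10% packet loss

A capture on the host bridge while the loss is visible would also show whether
the requests reach the host at all, e.g. (ovirtmgmt is just the default oVirt
bridge name, the VM may be attached to a different network):

tcpdump -ni ovirtmgmt icmp
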
28.05.2017 13:34, Lev Veyde wrote:
> Hi Alex,
>
> That is quite strange...
>
> Does this happen on both hosts - have you tried to migrate the VM to
> the second host and see if the issue still remains?
>
> Thanks in advance,
>
>
> On Fri, May 26, 2017 at 3:02 PM, Герасимов Александр
> <gerasimov.ay at eksmo.ru> wrote:
>
> Hi Lev.
>
>
> On one of the VMs you only see 1 NIC instead of the 2?
>
>     No. Both VMs see two NICs, but on the first VM ping shows no loss,
>     while on the second VM ping shows about 75% packet loss.
>
> OS version on hosts [root at node01 ~]# cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
>
>     OS version on the VMs [root at node03 ~]# cat /etc/redhat-release
> CentOS Linux release 7.3.1611 (Core)
>
>
>
> *first VM*
>
> 00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
>
> 00:09.0 Ethernet controller: Red Hat, Inc Virtio network device
>
> [root at node03 ~]# ip l
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state
> UNKNOWN mode DEFAULT qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast state UP mode DEFAULT qlen 1000
> link/ether 00:1a:4a:16:01:51 brd ff:ff:ff:ff:ff:ff
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast state UP mode DEFAULT qlen 1000
> link/ether 00:1a:4a:16:01:55 brd ff:ff:ff:ff:ff:ff
>
> *second VM*
>
>     00:03.0 Ethernet controller: Realtek Semiconductor Co., Ltd.
>     RTL-8100/8101L/8139 PCI Fast Ethernet Adapter (rev 20) - but I have
>     tried every available NIC type, with no effect
>
> 00:0a.0 Ethernet controller: Red Hat, Inc Virtio network device
>
> [root at node04 ~]# ip link
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state
> UNKNOWN mode DEFAULT qlen 1
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast state UP mode DEFAULT qlen 1000
> link/ether 00:1a:4a:16:01:53 brd ff:ff:ff:ff:ff:ff
> 3: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> pfifo_fast state UP mode DEFAULT qlen 1000
> link/ether 00:1a:4a:16:01:52 brd ff:ff:ff:ff:ff:ff
>
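>     For reference, the driver actually bound to each interface can be
>     checked with ethtool (the interface names are the ones from the
>     output above), e.g.:
>
>     ethtool -i eth0
>     ethtool -i ens3
>
>     which reports the kernel driver in use (e.g. virtio_net or 8139cp).
>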
>     There are no relevant messages in the logs, only entries like this:
>
> May 26 15:01:01 node04 systemd: Started Session 67263 of user root.
> May 26 15:01:01 node04 systemd: Starting Session 67263 of user root.
> May 26 15:01:01 node04 systemd: Created slice user-600.slice.
> May 26 15:01:01 node04 systemd: Starting user-600.slice.
> May 26 15:01:01 node04 systemd: Started Session 67262 of user bitrix.
> May 26 15:01:01 node04 systemd: Starting Session 67262 of user bitrix.
> May 26 15:01:01 node04 systemd: Removed slice user-600.slice.
> May 26 15:01:01 node04 systemd: Stopping user-600.slice.
>
>
> Hi Alexander,
>
> So if I understand it correctly, you have the following configuration:
> - 2 hosts, each having 2 NICs
> - 2 virtual machines, each having a connection to each of the NICs
> available on the hosts
>
> On one of the VMs you only see 1 NIC instead of the 2?
>
> Are you sure that the VM is properly configured to have 2 NICs?
>
> What Linux distro and version are you using on the hosts and inside
> the VMs?
>
> Can you please send us:
> - the logs from the VM, e.g. /var/log/messages
> - the output of lspci -v
> - the output of ip link
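> For example (just a rough sketch - the file names are only examples), all
> three could be collected from inside the VM with:
>
> lspci -v > lspci.txt
> ip link > ip_link.txt
> tail -n 1000 /var/log/messages > messages.txt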
>
> Thanks in advance,
>
> 2017-05-18 12:19 GMT+03:00 Герасимов Александр <gerasimov.ay at
> eksmo.ru>:
>
> > Hi all.
> >
> > I have two servers with oVirt.
> >
> > And two identical virtual machines.
> >
> > Both servers are identical, but on the second virtual machine one
> > network interface is not working properly: ping shows heavy packet
> > loss. I tried to change the network driver, but it had no effect.
> >
> > I don't know what to do.
> >
> >
> > ovirt version and package:
> >
> > rpm -qa|grep ovirt
> >
> > ovirt-imageio-proxy-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
> > ovirt-engine-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-restapi-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-extensions-api-impl-4.0.5.5-1.el7.centos.noarch
> > ovirt-imageio-daemon-0.4.0-1.el7.noarch
> > ovirt-engine-wildfly-10.1.0-1.el7.x86_64
> > ovirt-vmconsole-1.0.4-1.el7.centos.noarch
> > ovirt-engine-cli-3.6.9.2-1.el7.noarch
> > ovirt-engine-websocket-proxy-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-dashboard-1.0.5-1.el7.centos.noarch
> > ovirt-host-deploy-1.5.3-1.el7.centos.noarch
> > ovirt-engine-wildfly-overlay-10.0.0-1.el7.noarch
> > ovirt-engine-setup-base-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-dwh-setup-4.0.5-1.el7.centos.noarch
> >
> > ovirt-engine-setup-plugin-websocket-proxy-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-setup-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-dbscripts-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-userportal-4.0.5.5-1.el7.centos.noarch
> > ovirt-imageio-common-0.4.0-1.el7.noarch
> > python-ovirt-engine-sdk4-4.0.2-1.el7.centos.x86_64
> > ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch
> > ovirt-engine-dwh-4.0.5-1.el7.centos.noarch
> > ovirt-engine-tools-backup-4.0.5.5-1.el7.centos.noarch
> > ovirt-image-uploader-4.0.1-1.el7.centos.noarch
> > ovirt-engine-setup-plugin-ovirt-engine-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-tools-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-4.0.5.5-1.el7.centos.noarch
> > ovirt-release40-4.0.5-2.noarch
> > ovirt-host-deploy-java-1.5.3-1.el7.centos.noarch
> >
> > ovirt-engine-setup-plugin-ovirt-engine-common-4.0.5.5-1.el7.centos.noarch
> > ovirt-iso-uploader-4.0.2-1.el7.centos.noarch
> > ovirt-engine-webadmin-portal-4.0.5.5-1.el7.centos.noarch
> > ovirt-setup-lib-1.0.2-1.el7.centos.noarch
> > ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
> > ovirt-engine-lib-4.0.5.5-1.el7.centos.noarch
> > ovirt-imageio-proxy-setup-0.4.0-0.201608310602.gita9b573b.el7.centos.noarch
> > ovirt-engine-setup-plugin-vmconsole-proxy-helper-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-backend-4.0.5.5-1.el7.centos.noarch
> > ovirt-engine-extension-aaa-jdbc-1.1.1-1.el7.noarch
> > ovirt-vmconsole-proxy-1.0.4-1.el7.centos.noarch
> >
> >
>
> --
> Best regards, Basis administrator
> Гераcимов Александр
> tel. +7(495)4116886 ext. 5367
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> --
>
> Lev Veyde
>
> Software Engineer, RHCE | RHCVA | MCITP
>
> Red Hat Israel
>
> <https://www.redhat.com>
>
> lev at redhat.com | lveyde at redhat.com
>
> <https://red.ht/sig>
> TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
--
Best regards, Basis administrator
Гераcимов Александр
tel. +7(495)4116886 ext. 5367