[Users] Stack trace caused by FreeBSD client

Hi all,

Interesting thing I found out this afternoon. I have a FreeBSD 10 guest with virtio drivers, both disk and net.

The VM works fine, but when I connect over SSH to the VM, I see this stack trace in messages on the node:

Feb 23 19:19:42 hv3 kernel: ------------[ cut here ]------------
Feb 23 19:19:42 hv3 kernel: WARNING: at net/core/dev.c:1907 skb_warn_bad_offload+0xc2/0xf0() (Tainted: G W --------------- )
Feb 23 19:19:42 hv3 kernel: Hardware name: X9DR3-F
Feb 23 19:19:42 hv3 kernel: igb: caps=(0x12114bb3, 0x0) len=5686 data_len=5620 ip_summed=0
Feb 23 19:19:42 hv3 kernel: Modules linked in: ebt_arp nfs lockd fscache auth_rpcgss nfs_acl sunrpc bonding 8021q garp ebtable_nat ebtables bridge stp llc xt_physdev ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 xt_multiport iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 dm_round_robin dm_multipath vhost_net macvtap macvlan tun kvm_intel kvm iTCO_wdt iTCO_vendor_support sg ixgbe mdio sb_edac edac_core lpc_ich mfd_core i2c_i801 ioatdma igb dca i2c_algo_bit i2c_core ptp pps_core ext4 jbd2 mbcache sd_mod crc_t10dif 3w_sas ahci isci libsas scsi_transport_sas dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
Feb 23 19:19:42 hv3 kernel: Pid: 15280, comm: vhost-15276 Tainted: G W --------------- 2.6.32-431.5.1.el6.x86_64 #1
Feb 23 19:19:42 hv3 kernel: Call Trace:
Feb 23 19:19:42 hv3 kernel: <IRQ> [<ffffffff81071e27>] ? warn_slowpath_common+0x87/0xc0
Feb 23 19:19:42 hv3 kernel: [<ffffffff81071f16>] ? warn_slowpath_fmt+0x46/0x50
Feb 23 19:19:42 hv3 kernel: [<ffffffffa016c862>] ? igb_get_drvinfo+0x82/0xe0 [igb]
Feb 23 19:19:42 hv3 kernel: [<ffffffff8145b1d2>] ? skb_warn_bad_offload+0xc2/0xf0
Feb 23 19:19:42 hv3 kernel: [<ffffffff814602c1>] ? __skb_gso_segment+0x71/0xc0
Feb 23 19:19:42 hv3 kernel: [<ffffffff81460323>] ? skb_gso_segment+0x13/0x20
Feb 23 19:19:42 hv3 kernel: [<ffffffff814603cb>] ? dev_hard_start_xmit+0x9b/0x480
Feb 23 19:19:42 hv3 kernel: [<ffffffff8147bf5a>] ? sch_direct_xmit+0x15a/0x1c0
Feb 23 19:19:42 hv3 kernel: [<ffffffff81460a58>] ? dev_queue_xmit+0x228/0x320
Feb 23 19:19:42 hv3 kernel: [<ffffffffa035a898>] ? br_dev_queue_push_xmit+0x88/0xc0 [bridge]
Feb 23 19:19:42 hv3 kernel: [<ffffffffa035a928>] ? br_forward_finish+0x58/0x60 [bridge]
Feb 23 19:19:42 hv3 kernel: [<ffffffffa035a9da>] ? __br_forward+0xaa/0xd0 [bridge]
Feb 23 19:19:42 hv3 kernel: [<ffffffff814897b6>] ? nf_hook_slow+0x76/0x120
Feb 23 19:19:42 hv3 kernel: [<ffffffffa035aa5d>] ? br_forward+0x5d/0x70 [bridge]
Feb 23 19:19:42 hv3 kernel: [<ffffffffa035ba6b>] ? br_handle_frame_finish+0x17b/0x2a0 [bridge]
Feb 23 19:19:42 hv3 kernel: [<ffffffffa035bd3a>] ? br_handle_frame+0x1aa/0x250 [bridge]
Feb 23 19:19:42 hv3 kernel: [<ffffffff8145b7c9>] ? __netif_receive_skb+0x529/0x750
Feb 23 19:19:42 hv3 kernel: [<ffffffff8145ba8a>] ? process_backlog+0x9a/0x100
Feb 23 19:19:42 hv3 kernel: [<ffffffff81460d43>] ? net_rx_action+0x103/0x2f0
Feb 23 19:19:42 hv3 kernel: [<ffffffff8107a8e1>] ? __do_softirq+0xc1/0x1e0
Feb 23 19:19:42 hv3 kernel: [<ffffffff8100c30c>] ? call_softirq+0x1c/0x30
Feb 23 19:19:42 hv3 kernel: <EOI> [<ffffffff8100fa75>] ? do_softirq+0x65/0xa0
Feb 23 19:19:42 hv3 kernel: [<ffffffff814611c8>] ? netif_rx_ni+0x28/0x30
Feb 23 19:19:42 hv3 kernel: [<ffffffffa01a0749>] ? tun_sendmsg+0x229/0x4ec [tun]
Feb 23 19:19:42 hv3 kernel: [<ffffffffa027bcf5>] ? handle_tx+0x275/0x5e0 [vhost_net]
Feb 23 19:19:42 hv3 kernel: [<ffffffffa027c095>] ? handle_tx_kick+0x15/0x20 [vhost_net]
Feb 23 19:19:42 hv3 kernel: [<ffffffffa027955c>] ? vhost_worker+0xbc/0x140 [vhost_net]
Feb 23 19:19:42 hv3 kernel: [<ffffffffa02794a0>] ? vhost_worker+0x0/0x140 [vhost_net]
Feb 23 19:19:42 hv3 kernel: [<ffffffff8109aee6>] ? kthread+0x96/0xa0
Feb 23 19:19:42 hv3 kernel: [<ffffffff8100c20a>] ? child_rip+0xa/0x20
Feb 23 19:19:42 hv3 kernel: [<ffffffff8109ae50>] ? kthread+0x0/0xa0
Feb 23 19:19:42 hv3 kernel: [<ffffffff8100c200>] ? child_rip+0x0/0x20
Feb 23 19:19:42 hv3 kernel: ---[ end trace e93142595d6ecfc7 ]---

This is 100% reproducible, every time. The login itself works just fine. Some more info:

[root@hv3 ~]# uname -a
Linux hv3.ovirt.gs.cloud.lan 2.6.32-431.5.1.el6.x86_64 #1 SMP Wed Feb 12 00:41:43 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@hv3 ~]# rpm -qa | grep vdsm
vdsm-4.13.3-3.el6.x86_64
vdsm-xmlrpc-4.13.3-3.el6.noarch
vdsm-python-4.13.3-3.el6.x86_64
vdsm-cli-4.13.3-3.el6.noarch

--
Met vriendelijke groeten / With kind regards,

Johan Kooijman
E mail@johankooijman.com
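For anyone triaging a similar report, the interesting lines can be filtered out of the log with a quick grep. A sketch; `messages.sample` is a hypothetical saved copy of /var/log/messages from the node:

```shell
#!/bin/sh
# Sketch: pull the offload-warning summary out of a saved syslog excerpt.
# messages.sample is a hypothetical local copy of /var/log/messages;
# the patterns come from the trace quoted above.
grep -E 'skb_warn_bad_offload|caps=|end trace' messages.sample
```

Counting the `end trace` lines per SSH login would confirm the one-warning-per-connection pattern described above.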

----- Original Message -----
From: "Johan Kooijman" <mail@johankooijman.com>
To: "users" <users@ovirt.org>
Sent: Sunday, February 23, 2014 8:22:41 PM
Subject: [Users] Stack trace caused by FreeBSD client
Hi all,
Interesting thing I found out this afternoon. I have a FreeBSD 10 guest with virtio drivers, both disk and net.
The VM works fine, but when I connect over SSH to the VM, I see this stack trace in messages on the node:
This warning may be interesting to qemu/kvm/kernel developers, cc'ing Ronen.
[quoted stack trace and version details trimmed]
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 02/23/2014 10:13 PM, Nir Soffer wrote:
----- Original Message -----
From: "Johan Kooijman" <mail@johankooijman.com>
To: "users" <users@ovirt.org>
Sent: Sunday, February 23, 2014 8:22:41 PM
Subject: [Users] Stack trace caused by FreeBSD client
Hi all,
Interesting thing I found out this afternoon. I have a FreeBSD 10 guest with virtio drivers, both disk and net.
The VM works fine, but when I connect over SSH to the VM, I see this stack trace in messages on the node:

This warning may be interesting to qemu/kvm/kernel developers, cc'ing Ronen.
Probably, nobody bothered to productize FreeBSD. You can try to use E1000 instead of virtio.

Ronen.
[quoted stack trace and version details trimmed]
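Besides switching the model to E1000, another avenue (an untested sketch, not something verified in this thread) is to disable offloads on the virtio NIC inside the FreeBSD guest, since the host-side warning fires when a large offloaded frame from the guest has to be software-segmented:

```shell
# Inside the FreeBSD 10 guest, as root. vtnet0 is an assumed interface
# name; check with `ifconfig` first. -tso/-lro turn off TCP segmentation
# offload and large receive offload on that interface.
ifconfig vtnet0 -tso -lro
# Hypothetical rc.conf entry to make the change persistent:
# ifconfig_vtnet0="DHCP -tso -lro"
```

This keeps the virtio NIC (and its performance for non-bulk traffic) while avoiding the oversized GSO frames that trigger the warning, if offload is indeed the culprit.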

----- Original Message -----
From: "Ronen Hod" <rhod@redhat.com>
To: "Nir Soffer" <nsoffer@redhat.com>, "Johan Kooijman" <mail@johankooijman.com>
Cc: "users" <users@ovirt.org>
Sent: Monday, February 24, 2014 5:27:06 PM
Subject: Re: [Users] Stack trace caused by FreeBSD client
On 02/23/2014 10:13 PM, Nir Soffer wrote:
----- Original Message -----
From: "Johan Kooijman" <mail@johankooijman.com>
To: "users" <users@ovirt.org>
Sent: Sunday, February 23, 2014 8:22:41 PM
Subject: [Users] Stack trace caused by FreeBSD client
Interesting thing I found out this afternoon. I have a FreeBSD 10 guest with virtio drivers, both disk and net.
The VM works fine, but when I connect over SSH to the VM, I see this stack trace in messages on the node:

This warning may be interesting to qemu/kvm/kernel developers, cc'ing Ronen.
Probably, nobody bothered to productize FreeBSD. You can try to use E1000 instead of virtio.
You may find this useful: http://www.linux-kvm.org/page/Guest_Support_Status#FreeBSD

Nir
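If changing the guest NIC model is undesirable, a host-side workaround sometimes used for skb_warn_bad_offload on bridged hosts (an assumption on my part, not confirmed for this exact setup) is to turn off GSO/TSO on the bridge's physical uplink, here assumed to be the igb-driven NIC:

```shell
# On the node, as root. eth0 is a hypothetical uplink name; pick the igb
# interface that carries the VM bridge (e.g. check `brctl show`).
ethtool -K eth0 gso off tso off
# Verify the resulting offload settings:
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|generic-segmentation-offload'
```

The trade-off is higher host CPU usage for bulk transfers on that NIC, so this is worth testing rather than applying blindly.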
participants (3)
- Johan Kooijman
- Nir Soffer
- Ronen Hod