The hosts are three ProLiant DL380 Gen10 servers: two with an HPE Smart Array P816i-a SR Gen10 controller, and the third with an HPE Smart Array P408i-a SR Gen10. The storage for the oVirt environment is Gluster, and that last host is the arbiter in the Gluster setup.
The S.M.A.R.T. health status is OK on all hosts.
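(For anyone wanting to repeat the check: disks behind an HPE Smart Array controller are not visible to smartctl directly, so each physical drive has to be addressed with the `cciss,N` device type. A sketch that prints the per-drive commands to run; the block device `/dev/sda` and the drive index range are assumptions to adapt per host:)

```shell
# Print the per-drive health checks for disks behind an HPE Smart Array
# controller. smartctl's "-d cciss,N" device type addresses physical
# drive N on the controller; /dev/sda and indices 0-3 are assumptions
# for this setup, to be adapted to the actual drive count.
for i in 0 1 2 3; do
    echo "smartctl -H -d cciss,$i /dev/sda"
done
```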
Edoardo





On Thu, Aug 8, 2019 at 4:19 PM Sandro Bonazzola <sbonazzo@redhat.com> wrote:


On Thu, Aug 8, 2019 at 11:19 AM Edoardo Mazza <edo7411@gmail.com> wrote:
Hi all,
For several days now I have been getting this error for the same VM, but I don't understand why.
The VM's traffic is not excessive, and neither are its CPU and RAM usage, yet for a few minutes the VM stops responding. In the VM's messages log file I find the error below. Can you help me?
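(As background on the message itself: the kernel's soft-lockup warning fires when the watchdog thread cannot be scheduled on a CPU for roughly twice `kernel.watchdog_thresh` seconds, 10 by default, which is consistent with the ~25s stall reported below. The threshold can be inspected inside the guest:)

```shell
# Read the soft-lockup watchdog threshold (seconds). The kernel logs
# "soft lockup - CPU#N stuck" when a CPU is monopolized for roughly
# twice this value.
cat /proc/sys/kernel/watchdog_thresh
```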
thanks

can you check the S.M.A.R.T. health status of the disks? 

 
Edoardo
kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 25s! [kworker/2:0:26227]
Aug  8 02:51:11 vmmysql kernel: Modules linked in: binfmt_misc ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter snd_hda_codec_generic iosf_mbi crc32_pclmul ppdev ghash_clmulni_intel snd_hda_intel snd_hda_codec aesni_intel snd_hda_core lrw gf128mul glue_helper ablk_helper snd_hwdep cryptd snd_seq snd_seq_device snd_pcm snd_timer snd soundcore virtio_rng sg virtio_balloon i2c_piix4 parport_pc parport joydev pcspkr ip_tables xfs libcrc32c sd_mod
Aug  8 02:51:14 vmmysql kernel: crc_t10dif crct10dif_generic sr_mod cdrom virtio_net virtio_console virtio_scsi ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw qxl floppy drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ata_piix libata virtio_pci drm_panel_orientation_quirks virtio_ring virtio dm_mirror dm_region_hash dm_log dm_mod
Aug  8 02:51:14 vmmysql kernel: CPU: 2 PID: 26227 Comm: kworker/2:0 Kdump: loaded Tainted: G             L ------------   3.10.0-957.12.1.el7.x86_64 #1
Aug  8 02:51:14 vmmysql kernel: Hardware name: oVirt oVirt Node, BIOS 1.11.0-2.el7 04/01/2014
Aug  8 02:51:14 vmmysql kernel: Workqueue: events_freezable disk_events_workfn
Aug  8 02:51:14 vmmysql kernel: task: ffff9e25b6609040 ti: ffff9e27b1610000 task.ti: ffff9e27b1610000
Aug  8 02:51:14 vmmysql kernel: RIP: 0010:[<ffffffffb8b6b355>]  [<ffffffffb8b6b355>] _raw_spin_unlock_irqrestore+0x15/0x20
Aug  8 02:51:14 vmmysql kernel: RSP: 0000:ffff9e27b1613a68  EFLAGS: 00000286
Aug  8 02:51:14 vmmysql kernel: RAX: 0000000000000001 RBX: ffff9e27b1613a10 RCX: ffff9e27b72a3d05
Aug  8 02:51:14 vmmysql kernel: RDX: ffff9e27b729a420 RSI: 0000000000000286 RDI: 0000000000000286
Aug  8 02:51:14 vmmysql kernel: RBP: ffff9e27b1613a68 R08: 0000000000000001 R09: ffff9e25b67fc198
Aug  8 02:51:14 vmmysql kernel: R10: ffff9e27b45bd8d8 R11: 0000000000000000 R12: ffff9e25b67fde80
Aug  8 02:51:14 vmmysql kernel: R13: ffff9e25b67fc000 R14: ffff9e25b67fc158 R15: ffffffffc032f8e0
Aug  8 02:51:14 vmmysql kernel: FS:  0000000000000000(0000) GS:ffff9e27b7280000(0000) knlGS:0000000000000000
Aug  8 02:51:14 vmmysql kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug  8 02:51:14 vmmysql kernel: CR2: 00007f0c9e9b6008 CR3: 0000000232480000 CR4: 00000000003606e0
Aug  8 02:51:14 vmmysql kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
Aug  8 02:51:14 vmmysql kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Aug  8 02:51:14 vmmysql kernel: Call Trace:
Aug  8 02:51:14 vmmysql kernel: [<ffffffffc0323d65>] ata_scsi_queuecmd+0x155/0x450 [libata]
Aug  8 02:51:14 vmmysql kernel: [<ffffffffc031fdb0>] ? ata_scsiop_inq_std+0xf0/0xf0 [libata]
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb88d14f0>] scsi_dispatch_cmd+0xb0/0x240
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb88daa8c>] scsi_request_fn+0x4cc/0x680
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb8743679>] __blk_run_queue+0x39/0x50
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb874b6d5>] blk_execute_rq_nowait+0xb5/0x170
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb874b81b>] blk_execute_rq+0x8b/0x150
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb867d369>] ? bio_phys_segments+0x19/0x20
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb8746cb1>] ? blk_rq_bio_prep+0x31/0xb0
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb874b537>] ? blk_rq_map_kern+0xc7/0x180
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb88d75b3>] scsi_execute+0xd3/0x170
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb88d929e>] scsi_execute_req_flags+0x8e/0x100
Aug  8 02:51:14 vmmysql kernel: [<ffffffffc041431c>] sr_check_events+0xbc/0x2d0 [sr_mod]
Aug  8 02:51:14 vmmysql kernel: [<ffffffffc036905e>] cdrom_check_events+0x1e/0x40 [cdrom]
Aug  8 02:51:14 vmmysql kernel: [<ffffffffc04150b1>] sr_block_check_events+0xb1/0x120 [sr_mod]
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb8759276>] disk_check_events+0x66/0x190
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb87593b6>] disk_events_workfn+0x16/0x20
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb84b9d8f>] process_one_work+0x17f/0x440
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb84bae26>] worker_thread+0x126/0x3c0
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb84bad00>] ? manage_workers.isra.25+0x2a0/0x2a0
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb84c1c71>] kthread+0xd1/0xe0
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb84c1ba0>] ? insert_kthread_work+0x40/0x40
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb8b75bf7>] ret_from_fork_nospec_begin+0x21/0x21
Aug  8 02:51:14 vmmysql kernel: [<ffffffffb84c1ba0>] ? insert_kthread_work+0x40/0x40
Aug  8 02:51:14 vmmysql kernel: Code: 14 25 10 43 03 b9 5d c3 0f 1f 40 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48 89 e5 ff 14 25 10 43 03 b9 48 89 f7 57 9d <0f> 1f 44 00 00 5d c3 0f 1f 40 00 0f 1f 44 00 00 55 48 89 e5 48
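(One detail worth noting in the call trace above: the stuck kworker is inside sr_block_check_events -> cdrom_check_events, i.e. the periodic media-change polling of the emulated CD-ROM drive, which can stall for long stretches when the underlying storage is slow to answer. A possible mitigation sketch inside the guest; the device name sr0 is an assumption, and this is a workaround for the symptom, not a fix for slow storage:)

```shell
# Disable media-change polling on the virtual CD-ROM drive; the stuck
# kworker in the trace above was sitting in this polling path
# (sr_block_check_events). 0 disables polling for the device;
# -1 restores the system default.
echo 0 > /sys/block/sr0/events_poll_msecs
```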
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/5YPSA7G5NT6IGOCIAJ3QQF25D3OMN5EH/


--