I installed hpssacli-2.40-13.0.x86_64.rpm and the result of "hpssacli ctrl all show status" is:
Error: No controllers detected. Possible causes:.....
The OS runs on SD cards and the VMs run on an array of traditional disks.
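In case it helps, a quick sketch of how one might double-check whether the OS sees the Smart Array at all (assuming the standard HPE tools; on Gen10 the newer ssacli, which replaced hpssacli, is usually the tool that detects these controllers):

  # is the PCI device visible to the OS at all?
  lspci | grep -i 'smart array'
  # ssacli is the successor of hpssacli for Gen10 controllers
  ssacli ctrl all show status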
thanks
Edoardo

On Mon, Aug 12, 2019 at 05:59 Strahil <hunter86_bg@yahoo.com> wrote:

Could you check the health status of the controllers:
hpssacli ctrl all show status

Best Regards,
Strahil Nikolov

On Aug 11, 2019 09:55, Edoardo Mazza <edo7411@gmail.com> wrote:
The hosts are 3 ProLiant DL380 Gen10: 2 hosts with an HPE Smart Array P816i-a SR Gen10 controller and the other host with an
HPE Smart Array P408i-a SR Gen10. The storage for the oVirt environment is Gluster, and the last host is the arbiter in the Gluster environment.
The S.M.A.R.T. health status is OK for all hosts.
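Since VM stalls can also come from the storage layer, a quick Gluster health check may be worth adding here (a sketch; replace <volname> with the actual volume name):

  # are all bricks, including the arbiter, online?
  gluster volume status <volname>
  # any files pending heal between the replicas?
  gluster volume heal <volname> info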
Edoardo
On Thu, Aug 8, 2019 at 16:19 Sandro Bonazzola <sbonazzo@redhat.com> wrote:


On Thu, Aug 8, 2019 at 11:19 Edoardo Mazza <edo7411@gmail.com> wrote:
Hi all,
For several days now I have been getting this error for the same VM, but I don't understand why.
The traffic of the virtual machine is not excessive, and neither are its CPU and RAM usage, but for a few minutes the VM is unresponsive, and in the VM's messages log file I see the error below. Can you help me?
thanks

Can you check the S.M.A.R.T. health status of the disks?
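For reference, a minimal smartmontools sketch (device names are only examples; drives sitting behind an HPE Smart Array typically need the cciss passthrough syntax):

  # overall health of a directly attached disk
  smartctl -H /dev/sda
  # first physical drive behind a Smart Array controller (exposed through the logical device by the hpsa driver)
  smartctl -d cciss,0 -a /dev/sda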

Edoardo
kernel: NMI watchdog: BUG: soft lockup - CPU#2 stuck for 25s! [kworker/2:0:26227]
Aug  8 02:51:11 vmmysql kernel: Modules linked in: binfmt_misc ip6t_rpfilter ipt_REJECT nf_reject_ipv4 ip6t_REJECT nf_reject_ipv6 xt_conntrack ip_set nfnetlink ebtable_nat ebtable_broute bridge stp llc ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter snd_hda_codec_generic iosf_mbi crc32_pclmul ppdev ghash_clmulni_intel snd_hda_intel snd_hda_codec aesni_intel snd_hda_core lrw gf128mul glue_helper ablk_helper snd_hwdep cryptd snd_seq snd_seq_device snd_pcm snd_timer snd soundcore virtio_rng sg virtio_balloon i2c_piix4 parport_pc parport joydev pcspkr ip_tables xfs libcrc32c sd_mod
Aug  8 02:51:14 vmmysql kernel: crc_t10dif crct10dif_generic sr_mod cdrom virtio_net virtio_console virtio_scsi ata_generic pata_acpi crct10dif_pclmul crct10dif_common crc32c_intel serio_raw qxl floppy drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm ata_piix libata virtio_pci drm_panel_orientation_quirks virtio_ring virtio dm_mirror dm_region_hash dm_log dm_mod
Aug  8 02:51:14 vmmysql kernel: CPU: 2 PID: 26227 Comm: kworker/2:0 Kdump: loaded Tainted: G             L ------------   3.10.0-957.12.1.el7.x86_64 #1
Aug  8 02:51:14 vmmysql kernel: Hardware name: oVirt oVirt Node, BIOS 1.11.0-2.el7 04/01/2014
Aug  8 02:51:14 vmmysql kernel: Workqueue: events_freezable disk_events_workfn
Aug  8 02:51:14 vmmysql kernel: task: ffff9e25b6609040 ti: ffff9e27b1610000 task.ti: ffff9e27b1610000
Aug  8 02:51:14 vmmysql kernel: RIP: 0010:[<ffffffffb8b6b355>]  [<ffffffffb8b6b355>] _raw_spin_unlock_irqrestore+0x15/0x20
Aug  8 02:51:14 vmmysql kernel: RSP: 0000:ffff9e27b1613a68  EFLAGS: 00000286
Aug  8 02:51:14 vmmysql kernel: RAX: 0000000000000001 RBX: ffff9e27b1613a10 RCX: ffff9e27b72a3d05
Aug  8 02:51:14 vmmysql kernel: RDX: ffff9e27b729a420 RSI: 0000000000000286 RDI: 0000000000000286
Aug  8 02:51:14 vmmysql kernel: RBP: ffff9e27b1613a68 R08: 0000000000000001 R09: ffff9e25b67fc198
Aug  8 02:51:14 vmmysql kernel: R10: ffff9e27b45bd8d8 R11: 0000000000000000 R12: ffff9e25b67fde80
Aug  8 02:51:14 vmmysql kernel: R13: ffff9e25b67fc000 R14: ffff9e25b67fc158 R15: ffffffffc032f8e0
Aug  8 02:51:14 vmmysql kernel: FS:  0000000000000000(0000) GS:ffff9e27b7280000(0000) knlGS:0000000000000000
Aug  8 02:51:14 vmmysql kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Aug  8 02:51:14 vmmysql kernel: CR2: 00007f0c9e9b6008 CR3: 00