
Hello Mario,

On 16/07/2015 04:12 PM, ml@ohnewald.net wrote:
> Check the vdsm logs on your nodes. I bet you will find something about I/O errors there.

Yes, there are many I/O errors:

libvirtEventLoop::INFO::2015-07-16 22:30:02,237::vm::3609::virt.vm::(onIOError) vmId=`bb46929c-0b4e-4f01-868a-7e7638fa943b`::abnormal vm stop device virtio-disk0 error eother
libvirtEventLoop::INFO::2015-07-16 22:30:02,237::vm::4889::virt.vm::(_logGuestCpuStatus) vmId=`bb46929c-0b4e-4f01-868a-7e7638fa943b`::CPU stopped: onIOError

Full vdsm log: https://paste.fedoraproject.org/245148/43707759/

And GlusterFS errors:

W [MSGID: 114031] [client-rpc-fops.c:2973:client3_3_lookup_cbk] 0-distributed_vol-client-0: remote operation failed: Transport endpoint is not connected. Path: / (00000000-0000-0000-0000-000000000001) [Transport endpoint is not connected]
W [fuse-bridge.c:2273:fuse_writev_cbk] 0-glusterfs-fuse: 362694: WRITE => -1 (Transport endpoint is not connected)

K.
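When the full vdsm log is large, onIOError events like the two lines above can be pulled out mechanically. Below is a minimal sketch; it assumes only the line format visible in the excerpt, and `find_io_errors` is a hypothetical helper name, not part of vdsm:

```python
import re

# Matches vdsm onIOError lines of the shape seen in the excerpt, e.g.:
#   ...virt.vm::(onIOError) vmId=`bb46929c-...`::abnormal vm stop device virtio-disk0 error eother
IOERROR_RE = re.compile(
    r"\(onIOError\) vmId=`(?P<vmid>[0-9a-f-]+)`"
    r"::abnormal vm stop device (?P<device>\S+) error (?P<error>\S+)"
)

def find_io_errors(lines):
    """Return (vmId, device, error) for every onIOError line found."""
    events = []
    for line in lines:
        m = IOERROR_RE.search(line)
        if m:
            events.append((m.group("vmid"), m.group("device"), m.group("error")))
    return events

# Usage (hypothetical path):
#   with open("/var/log/vdsm/vdsm.log") as f:
#       for vmid, device, error in find_io_errors(f):
#           print(vmid, device, error)
```

Grouping the results by device or error code can show whether the pauses all come from the same virtual disk.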
> Also check your GlusterFS logs. Maybe you can find some problems there, too.
>
> Mario
>
> On 16.07.15 at 10:29, Konstantinos Christidis wrote:
Hello oVirt users,
I am facing a serious problem with my GlusterFS storage and the virtual machines whose *bootable* disks live on it.
All my VMs with GlusterFS disks are occasionally (1-2 times per hour) paused with the following error: VM vm02.mytld has been paused due to unknown storage error.
Engine log:

INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (DefaultQuartzScheduler_Worker-69) [] VM '247bb0f3-1a77-44e4-a404-3271eaee94be'(vm02.mytld) moved from 'Up' --> 'Paused'
INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM vm02.mytld has been paused.
ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM vm02.mytld has been paused due to unknown storage error
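How often each VM hits this can be counted straight from the engine log. A small sketch, assuming only the "moved from 'Up' --> 'Paused'" line format shown above; `count_pauses` is a hypothetical helper, not part of oVirt:

```python
import re
from collections import Counter

# Matches engine-log state transitions of the shape seen above, e.g.:
#   VM '247bb0f3-...'(vm02.mytld) moved from 'Up' --> 'Paused'
TRANSITION_RE = re.compile(
    r"VM '(?P<vmid>[0-9a-f-]+)'\((?P<name>[^)]+)\) "
    r"moved from '(?P<src>\w+)' --> '(?P<dst>\w+)'"
)

def count_pauses(lines):
    """Count Up -> Paused transitions per VM name."""
    pauses = Counter()
    for line in lines:
        m = TRANSITION_RE.search(line)
        if m and m.group("src") == "Up" and m.group("dst") == "Paused":
            pauses[m.group("name")] += 1
    return pauses
```

Run against engine.log, this shows at a glance whether every GlusterFS-backed VM pauses at roughly the same 1-2/hour rate or one VM dominates.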
My iSCSI VMs, some of which mount additional (non-bootable) disks from the same GlusterFS storage, do NOT suffer from this issue, AFAIK.
My installation (oVirt 3.6/CentOS 7) is a fairly typical one: a GlusterFS-enabled cluster with 4 hosts, 2-3 networks, and 6-7 VMs.
Thanks,
K.

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users