[ovirt-users] Fwd: Re: question mark on VM ( DB status 8 )

paf1 at email.cz paf1 at email.cz
Thu Mar 17 15:14:20 UTC 2016


URGENT

-------- Forwarded Message --------
Subject: 	Re: [ovirt-users] question mark on VM ( DB status 8 )
Date: 	Thu, 17 Mar 2016 16:43:54 +0200
From: 	Nir Soffer <nsoffer at redhat.com>
To: 	paf1 at email.cz <paf1 at email.cz>



Can you send this to the users list?

This looks like a virt issue, so it should be checked by the guys
working on this part of the code.

Thanks,
Nir

On Thu, Mar 17, 2016 at 4:07 PM, paf1 at email.cz <paf1 at email.cz> wrote:
> Hi Nir,
> look at this piece of the logs, which repeats in a cycle.
>
> The main issue happened at about 3-5 AM today (17 Mar).
>
> CSA_EBSDB_TEST2 was shut down from the OS, but its status was not updated in
> the oVirt GUI (I changed it manually in the DB to status 1). One other VM is
> still in status "8" due to a locked snapshot file ( sf-sh-s07 ).
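For reference, a minimal sketch of the kind of manual status reset described above, using an in-memory SQLite DB as a stand-in for the engine's PostgreSQL database. The `vm_dynamic` table and `status` column names are assumptions drawn from this thread; on a real engine you would run the equivalent UPDATE via psql against the engine DB, and editing the DB by hand is risky and best done with the engine stopped:

```python
import sqlite3

# In-memory stand-in for the engine DB (assumption: vm_dynamic holds
# per-VM runtime state, with an integer status column).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vm_dynamic (vm_guid TEXT PRIMARY KEY, status INTEGER)")

# VM stuck in the state reported in the thread (DB status 8):
vm = "a60a0eae-9738-4833-9feb-de2494c545a4"
conn.execute("INSERT INTO vm_dynamic VALUES (?, 8)", (vm,))

# The manual fix the poster applied: force the status back to 1.
conn.execute("UPDATE vm_dynamic SET status = 1 WHERE vm_guid = ?", (vm,))
conn.commit()

status = conn.execute(
    "SELECT status FROM vm_dynamic WHERE vm_guid = ?", (vm,)
).fetchone()[0]
print(status)  # 1
```

As the rest of the thread notes, this only papers over the symptom; the lock that put the VM in this state has to be understood separately.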
>
> engine.log
> ==========
>
> repeated hour after hour ... continually
>
> 2016-03-17 14:38:21,146 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-20) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 5a34e053
> 2016-03-17 14:38:21,830 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-20) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 240192c6,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 753f6685,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 79a21b20,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at a4634e44,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at fd990620,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 57883869,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 3b458bc8,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 80f225de,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at ec4c19bd,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 947dc2e4,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at f773ab98},
> log id: 5a34e053
> 2016-03-17 14:38:27,131 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-79) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 24e7703f
> 2016-03-17 14:38:27,801 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-79) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 4e72f0f4,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 89bfd4dd,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at f6cb25b,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at f4bb56bf,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at e0121f88,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 435fc00f,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 7b23bf23,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 1f8e886,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 1fbbe1c1,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 87c991cd,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 2fc8ef3e},
> log id: 24e7703f
> 2016-03-17 14:38:33,097 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-15) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 2e987652
> 2016-03-17 14:38:33,809 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-15) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 22f57524,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 229b8873,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at d9e0727e,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 3e54e436,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at a32a922d,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at ef616411,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 987712e1,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 21786d69,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 3411ecb4,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 9ccdb073,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at ae4e2f13},
> log id: 2e987652
> 2016-03-17 14:38:39,131 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-70) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 2d9df607
> 2016-03-17 14:38:39,812 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-70) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 24ba8cf2,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at be52739d,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at fa7acd26,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at bfa54163,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at a50ab364,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at c85c798b,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 4404dc57,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at a87b6b00,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 58e582ba,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 127588cb,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 101be9b2},
> log id: 2d9df607
> 2016-03-17 14:38:45,136 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-91) START,
> GlusterVolumesListVDSCommand(HostName = 2kvm1, HostId =
> 4c3a2622-14d5-43c8-8e15-99cb66104b5a), log id: 4b3faf1c
> 2016-03-17 14:38:45,152 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
> (DefaultQuartzScheduler_Worker-43) START,
> GlusterTasksListVDSCommand(HostName = 1kvm1, HostId =
> 98c4520a-bcff-45b2-8f66-e360e10e1fb2), log id: 1df75525
> 2016-03-17 14:38:45,400 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterTasksListVDSCommand]
> (DefaultQuartzScheduler_Worker-43) FINISH, GlusterTasksListVDSCommand,
> return: [], log id: 1df75525
> 2016-03-17 14:38:45,814 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-91) FINISH, GlusterVolumesListVDSCommand,
> return:
> {a5a8ccbc-edee-4e49-9e2a-4d2ee5767f76=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 53cb3de0,
> 18310aeb-639f-4b6d-9ef4-9ef560d6175c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 2cf0ae7f,
> 4a6d775d-4a51-4f6c-9bfa-f7ef57f3ca1d=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at bac16bbe,
> f410c6a9-9a51-42b3-89bb-c20ac72a0461=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 5b460bdf,
> 62c89345-fd61-4b67-b8b4-69296eb7d217=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 47213703,
> aa2d607d-3c6c-4f13-8205-aae09dcc9d35=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 714406f8,
> b4356604-4404-428a-9da6-f1636115e2fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 94740550,
> 9745551f-4696-4a6c-820a-619e359a61fd=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at c00582e3,
> 25a5ec22-660e-42a0-aa00-45211d341738=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at e5687263,
> 6060ff77-d552-4d94-97bf-5a32982e7d8a=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at a0163ead,
> cbf142f8-a40b-4cf4-ad29-2243c81d30c1=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 1e1ea424},
> log id: 4b3faf1c
>
>
> vdsm.log
> this block repeats non-stop ...
>
> Thread-798::DEBUG::2016-03-17
> 14:41:07,108::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,109::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,111::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,113::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-798::DEBUG::2016-03-17
> 14:41:07,114::libvirtconnection::151::root::(wrapper) Unknown libvirterror:
> ecode: 42 edom: 10 level: 2 message: Domain not found: no domain with
> matching uuid 'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)
> Thread-158::DEBUG::2016-03-17
> 14:41:07,521::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P5/1ca56b45-701e-4c22-9f59-3aebea4d8477/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-158::DEBUG::2016-03-17
> 14:41:07,560::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n998 bytes (998 B) copied,
> 0.000523763 s, 1.9 MB/s\n'; <rc> = 0
> Thread-180::DEBUG::2016-03-17
> 14:41:07,565::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_2KVM12-P1/42d710a9-b844-43dc-be41-77002d1cd553/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-126::DEBUG::2016-03-17
> 14:41:07,566::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P1/553d9b92-e4a0-4042-a579-4cabeb55ded4/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-180::DEBUG::2016-03-17
> 14:41:07,616::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n333 bytes (333 B) copied,
> 0.000606695 s, 549 kB/s\n'; <rc> = 0
> Thread-158::INFO::2016-03-17
> 14:41:07,616::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 1ca56b45-701e-4c22-9f59-3aebea4d8477 (id: 3)
> Thread-158::DEBUG::2016-03-17
> 14:41:07,619::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 1ca56b45-701e-4c22-9f59-3aebea4d8477 successfully acquired (id: 3)
> Thread-126::DEBUG::2016-03-17
> 14:41:07,620::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n333 bytes (333 B) copied,
> 0.000476478 s, 699 kB/s\n'; <rc> = 0
> Thread-180::INFO::2016-03-17
> 14:41:07,623::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 42d710a9-b844-43dc-be41-77002d1cd553 (id: 3)
> Thread-180::DEBUG::2016-03-17
> 14:41:07,624::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 42d710a9-b844-43dc-be41-77002d1cd553 successfully acquired (id: 3)
> Thread-126::INFO::2016-03-17
> 14:41:07,626::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 553d9b92-e4a0-4042-a579-4cabeb55ded4 (id: 3)
> Thread-126::DEBUG::2016-03-17
> 14:41:07,626::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 553d9b92-e4a0-4042-a579-4cabeb55ded4 successfully acquired (id: 3)
> Thread-1897022::DEBUG::2016-03-17
> 14:41:07,701::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest) Calling
> 'GlusterVolume.list' in bridge with {}
> Thread-113::DEBUG::2016-03-17
> 14:41:07,704::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-BCK/0fcad888-d573-47be-bef3-0bc0b7a99fb7/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-113::DEBUG::2016-03-17
> 14:41:07,747::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n335 bytes (335 B) copied,
> 0.000568018 s, 590 kB/s\n'; <rc> = 0
> Thread-198::DEBUG::2016-03-17
> 14:41:07,758::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P2/88adbd49-62d6-45b1-9992-b04464a04112/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-188::DEBUG::2016-03-17
> 14:41:07,758::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_2KVM12__P4/300e9ac8-3c2f-4703-9bb1-1df2130c7c97/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-198::DEBUG::2016-03-17
> 14:41:07,811::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n333 bytes (333 B) copied,
> 0.000455407 s, 731 kB/s\n'; <rc> = 0
> Thread-188::DEBUG::2016-03-17
> 14:41:07,815::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n734 bytes (734 B) copied,
> 0.000535009 s, 1.4 MB/s\n'; <rc> = 0
> Thread-198::INFO::2016-03-17
> 14:41:07,826::clusterlock::219::Storage.SANLock::(acquireHostId) Acquiring
> host id for domain 88adbd49-62d6-45b1-9992-b04464a04112 (id: 3)
> Thread-198::DEBUG::2016-03-17
> 14:41:07,828::clusterlock::237::Storage.SANLock::(acquireHostId) Host id for
> domain 88adbd49-62d6-45b1-9992-b04464a04112 successfully acquired (id: 3)
> Thread-98::DEBUG::2016-03-17
> 14:41:07,838::fileSD::262::Storage.Misc.excCmd::(getReadDelay) /usr/bin/dd
> if=/rhev/data-center/mnt/glusterSD/localhost:_1KVM12-P3/3c34ad63-6c66-4e23-ab46-084f3d70b147/dom_md/metadata
> iflag=direct of=/dev/null bs=4096 count=1 (cwd None)
> Thread-98::DEBUG::2016-03-17
> 14:41:07,870::fileSD::262::Storage.Misc.excCmd::(getReadDelay) SUCCESS:
> <err> = '0+1 records in\n0+1 records out\n998 bytes (998 B) copied,
> 0.000564777 s, 1.8 MB/s\n'; <rc> = 0
> VM Channels Listener::DEBUG::2016-03-17
> 14:41:07,883::vmchannels::133::vds::(_handle_unconnected) Trying to connect
> fileno 43.
> VM Channels Listener::DEBUG::2016-03-17
> 14:41:07,889::vmchannels::133::vds::(_handle_unconnected) Trying to connect
> fileno 47.
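The cycling "Domain not found" lines above can be confirmed with a short parser. This sketch (the line format is taken verbatim from the vdsm.log excerpt in this thread) counts how often libvirt reports a missing domain, per UUID:

```python
import re
from collections import Counter

# Matches the libvirt VIR_ERR_NO_DOMAIN message as logged by vdsm's
# libvirtconnection wrapper in the excerpt above.
ERR_RE = re.compile(r"no domain with matching uuid '([0-9a-f-]+)'")

def count_missing_domains(lines):
    """Return a Counter mapping libvirt domain UUID -> occurrence count."""
    hits = Counter()
    for line in lines:
        m = ERR_RE.search(line)
        if m:
            hits[m.group(1)] += 1
    return hits

# Sample built from one of the repeated log lines:
sample = [
    "Thread-798::DEBUG::2016-03-17 14:41:07,108::libvirtconnection::151::"
    "root::(wrapper) Unknown libvirterror: ecode: 42 edom: 10 level: 2 "
    "message: Domain not found: no domain with matching uuid "
    "'a60a0eae-9738-4833-9feb-de2494c545a4' (CSA_EBSDB_TEST2)",
] * 5

counts = count_missing_domains(sample)
print(counts)  # Counter({'a60a0eae-9738-4833-9feb-de2494c545a4': 5})
```

A high, steadily growing count for one UUID is consistent with the engine still polling a VM whose libvirt domain is gone, i.e. the stale-status symptom described above.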
>
>
>
>
> On 17.3.2016 12:58, Nir Soffer wrote:
>
> On Thu, Mar 17, 2016 at 11:35 AM, paf1 at email.cz <paf1 at email.cz> wrote:
>
> I used that, but the lock was active again within a few seconds.
> And oVirt does not update any VM's status.
>
> Unlocking entities is ok when you know that the operation that took
> the lock is finished or failed. This is a workaround for buggy
> operations leaving disks in a locked state, not a normal way to use
> the system.
>
> We first must understand the flow that caused the snapshot to be
> locked, and why it remains locked.
>
> Please describe in detail the operations on the engine side, and
> provide engine and vdsm logs covering this timeframe.
>
> Nir
>
> Pa.
>
>
> On 17.3.2016 10:26, Eli Mesika wrote:
>
>
>
> ________________________________
>
> From: paf1 at email.cz
> To: "users" <users at ovirt.org>
> Sent: Thursday, March 17, 2016 9:27:11 AM
> Subject: [ovirt-users] question mark on VM ( DB status 8 )
>
> Hello,
> during backup
>
> What do you mean by "backup"? Can you describe how you back up the VM?
>
> The VM hung with a question mark in oVirt and status 8 in the DB;
> the snapshot file (for backup) is locked.
> How can I clear the snapshot lock and wake this VM up from the "unknown" state?
>
>
> Try using the unlock_entity.sh utility (run with --help for usage)
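A hypothetical invocation sketch for the utility mentioned above. The install path and the `-t`/`-q` flags here are assumptions; check `unlock_entity.sh --help` on your engine host for the real options before running anything:

```shell
# Typical dbutils location on an engine host (assumption):
ENGINE_DBUTILS="/usr/share/ovirt-engine/setup/dbutils"

# Query locked snapshot entities first (-q), then unlock the specific
# one -- flags per your version's --help output:
CMD="$ENGINE_DBUTILS/unlock_entity.sh -t snapshot -q"
echo "would run: $CMD"
```

Querying before unlocking matters here: as the rest of the thread points out, force-unlocking is only safe once the operation that took the lock is known to be finished or failed.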
>
>
>
> regs.
> pavel
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
>
>




