It's the same error:
c1d7c4e-392b-4a62-9836-3add1360a46d::DEBUG::2014-04-22 12:13:44,340::volume::1058::Storage.Misc.excCmd::(createVolume) FAILED: <err> = '/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/0b2d15e5-bf4f-4eaf-90e2-f1bd51a3a936: error while creating qcow2: No such file or directory\n'; <rc> = 1
Were these 23 snapshots created anyway each time a snapshot attempt failed, or are they older snapshots that you actually created before the failures started?
At this point my main theory is that somewhere along the line you had some sort of failure in your storage, and from that time on each snapshot you create fails.
If the snapshots were created during the failed attempts, can you please delete the snapshots you do not need and try again?
There should not be a limit on how many snapshots you can have, since each snapshot is only a new link in the chain that changes the image the VM boots from. Having said that, it's not ideal to have that many snapshots; it can lead to unexpected results, so I would not recommend keeping that many snapshots on a single VM :)
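To illustrate what I mean by "only a link": at the qcow2 level each snapshot is just a new volume whose backing file is the previous top of the chain. A minimal sketch outside oVirt, with made-up file names (not the exact vdsm flow, just the same mechanism):

# made-up names; each "snapshot" is a new qcow2 backed by the previous
# top of the chain, and the VM boots from the newest one
qemu-img create -f qcow2 base.qcow2 20G
qemu-img create -f qcow2 -b base.qcow2 snap1.qcow2   # snapshot 1
qemu-img create -f qcow2 -b snap1.qcow2 snap2.qcow2  # snapshot 2
qemu-img info snap2.qcow2   # shows "backing file: snap1.qcow2"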
For example, my second theory would be that because there are so many snapshots we hit some sort of race: part of the createVolume command expects a result from a query run before the create itself, and because there are so many snapshots we get "no such file" on the volume because it's too far up the list.
Can you also run:
ls -l /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b
Let's see what images are listed under that VM.
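If it helps, something like this would also show each volume's backing file, so we can see whether any link in the chain points at a file that is missing (a rough sketch; run it on the host and skip the .meta/.lease files):

cd /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b
for vol in *; do
  case "$vol" in *.meta|*.lease) continue ;; esac
  echo "== $vol =="
  # relative backing paths resolve from this directory, the same way qemu-img resolves them
  qemu-img info "$vol" | grep -E 'file format|backing file'
done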
By the way, did you know that your export domain is getting StorageDomainDoesNotExist in the vdsm log? Is that domain in an up state? Can you try to deactivate the export domain?
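For reference, you can see how often it's hitting that error with something like this (assuming the default vdsm log location):

grep -c StorageDomainDoesNotExist /var/log/vdsm/vdsm.log
grep StorageDomainDoesNotExist /var/log/vdsm/vdsm.log | tail -5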
Thanks,
Dafna
On 04/22/2014 05:20 PM, Steve Dainard wrote:
Ominous...
23 snapshots. Is there an upper limit?
Offline snapshot fails as well. Both logs attached again (snapshot
attempted at 12:13 EST).
Steve
On Tue, Apr 22, 2014 at 11:20 AM, Dafna Ron <dron@redhat.com> wrote:
Are you able to take an offline snapshot (while the VM is down)? How many snapshots do you have on this VM?
On 04/22/2014 04:19 PM, Steve Dainard wrote:
No alert in the web UI. I restarted the VM yesterday just in case; no change. I also restored an earlier snapshot and tried to re-snapshot, same result.
Steve
On Tue, Apr 22, 2014 at 10:57 AM, Dafna Ron <dron@redhat.com> wrote:
This is the actual problem:
bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::DEBUG::2014-04-22 10:21:49,374::volume::1058::Storage.Misc.excCmd::(createVolume) FAILED: <err> = '/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb: error while creating qcow2: No such file or directory\n'; <rc> = 1
From that you see the actual failure:
bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22 10:21:49,392::volume::286::Storage.Volume::(clone) Volume.clone: can't clone: /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7 to /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb
bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22 10:21:49,392::volume::508::Storage.Volume::(create) Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/volume.py", line 466, in create
    srcVolUUID, imgPath, volPath)
  File "/usr/share/vdsm/storage/fileVolume.py", line 160, in _create
    volParent.clone(imgPath, volUUID, volFormat, preallocate)
  File "/usr/share/vdsm/storage/volume.py", line 287, in clone
    raise se.CannotCloneVolume(self.volumePath, dst_path, str(e))
CannotCloneVolume: Cannot clone volume: 'src=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7, dst=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb: Error creating a new volume: (["Formatting \'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb\', fmt=qcow2 size=21474836480 backing_file=\'../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7\' backing_fmt=\'qcow2\' encryption=off cluster_size=65536 "],)'
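One more thing worth trying: run the same create by hand on the host to see whether qemu-img itself fails outside of vdsm. This is just a sketch (test-manual.qcow2 is a throwaway name; delete it afterwards). Note the backing_file in the log is relative, so the command has to run from inside the image directory for it to resolve the same way:

cd /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b
# same options vdsm used, per the "Formatting" line above
qemu-img create -f qcow2 -o backing_file=../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7,backing_fmt=qcow2 test-manual.qcow2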
Do you have any alert in the webadmin to restart the VM?
Dafna
On 04/22/2014 03:31 PM, Steve Dainard wrote:
Sorry for the confusion.
I attempted to take a live snapshot of a running VM. After that failed, I migrated the VM to another host and attempted the live snapshot again without success, eliminating a single host as the cause of failure.
oVirt is 3.3.4, the storage domain is Gluster 3.4.2.1, and the OS is CentOS 6.5.
Package versions:
libvirt-0.10.2-29.el6_5.5.x86_64
libvirt-lock-sanlock-0.10.2-29.el6_5.5.x86_64
qemu-img-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
qemu-kvm-rhev-tools-0.12.1.2-2.415.el6.nux.3.x86_64
vdsm-4.13.3-4.el6.x86_64
vdsm-gluster-4.13.3-4.el6.noarch
I made another live snapshot attempt at 10:21 EST today; the full vdsm.log is attached, along with a truncated engine.log.
Thanks,
Steve
On Tue, Apr 22, 2014 at 9:48 AM, Dafna Ron <dron@redhat.com> wrote:
Please explain the flow of what you are trying to do: are you trying to live migrate the disk (from one storage to another)? Are you trying to migrate the VM and, after the migration is finished, take a live snapshot of it? Or are you trying to take a live snapshot of the VM during a migration from host1 to host2?
Please attach full vdsm logs from any host you are using (if you are trying to migrate the VM from host1 to host2), and please attach the engine log.
Also, what are the vdsm, libvirt, and qemu versions, what oVirt version are you using, and what storage are you using?
Thanks,
Dafna
On 04/22/2014 02:12 PM, Steve Dainard wrote:
I've attempted migrating the VM to another host and taking a snapshot, but I get this error:
6efd33f4-984c-4513-b5e6-fffdca2e983b::ERROR::2014-04-22 01:09:37,296::volume::286::Storage.Volume::(clone) Volume.clone: can't clone: /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7 to /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/b230596f-97bc-4532-ba57-5654fa9c6c51
A bit more of the vdsm log is attached.
Other VMs are snapshotting without issue.
Any help appreciated,
Steve
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
Dafna Ron