Ominous...
23 snapshots. Is there an upper limit?
Offline snapshot fails as well. Both logs attached again (snapshot
attempted at 12:13 EST).
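I don't believe there is a fixed cap at 23, but each snapshot adds one qcow2 volume to the disk's chain, so on a file-based domain you can sanity-check how long the chain has gotten by counting volumes under the image directory. A rough sketch — the /rhev layout in the comment is an assumption about your setup, and a throwaway directory stands in for the real path here:

```shell
# Rough sketch (assumed layout): on a file domain each snapshot in the
# chain is one volume under
#   /rhev/data-center/<sp-uuid>/<sd-uuid>/images/<disk-image-uuid>/
# A throwaway directory stands in for that path below.
IMG_DIR=$(mktemp -d)
for i in 1 2 3; do
    touch "$IMG_DIR/vol-$i"   # stand-ins for the qcow2 volume files
done
count=$(ls "$IMG_DIR" | wc -l | tr -d ' ')
echo "volumes in chain: $count"
```

On the real image directory, count only the volume files themselves; each one also has a .meta and .lease alongside it.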
Steve
On Tue, Apr 22, 2014 at 11:20 AM, Dafna Ron <dron(a)redhat.com> wrote:
are you able to take an offline snapshot (while the vm is down)?
how many snapshots do you have on this vm?
On 04/22/2014 04:19 PM, Steve Dainard wrote:
> No alert in web ui, I restarted the VM yesterday just in case, no change.
> I also restored an earlier snapshot and tried to re-snapshot, same result.
>
> Steve
>
>
>
> On Tue, Apr 22, 2014 at 10:57 AM, Dafna Ron <dron(a)redhat.com> wrote:
>
> This is the actual problem:
>
> bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::DEBUG::2014-04-22 10:21:49,374::volume::1058::Storage.Misc.excCmd::(createVolume) FAILED: <err> = '/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb: error while creating qcow2: No such file or directory\n'; <rc> = 1
>
> from that you see the actual failure:
>
> bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22 10:21:49,392::volume::286::Storage.Volume::(clone) Volume.clone: can't clone: /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7 to /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb
> bf025a73-eeeb-4ac5-b8a9-32afa4ae482e::ERROR::2014-04-22 10:21:49,392::volume::508::Storage.Volume::(create) Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/volume.py", line 466, in create
>     srcVolUUID, imgPath, volPath)
>   File "/usr/share/vdsm/storage/fileVolume.py", line 160, in _create
>     volParent.clone(imgPath, volUUID, volFormat, preallocate)
>   File "/usr/share/vdsm/storage/volume.py", line 287, in clone
>     raise se.CannotCloneVolume(self.volumePath, dst_path, str(e))
> CannotCloneVolume: Cannot clone volume: 'src=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7, dst=/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb: Error creating a new volume: (["Formatting \'/rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/87efa937-b31f-4bb1-aee1-0ee14a0dc6fb\', fmt=qcow2 size=21474836480 backing_file=\'../466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7\' backing_fmt=\'qcow2\' encryption=off cluster_size=65536 "],)'
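One thing worth checking on that qemu-img failure: the "No such file or directory" is qemu-img failing to open the backing file, and the backing_file argument in the log is relative (../<image-uuid>/<volume-uuid>), which qcow2 resolves against the directory of the new overlay rather than the caller's working directory. A self-contained sketch of that path resolution, using made-up names in a throwaway directory:

```shell
# Sketch with hypothetical names: qemu-img resolves a relative
# backing_file against the overlay's own directory. Simulate that
# layout and test the same ../<image-uuid>/<volume-uuid> form that
# appears in the failing command.
root=$(mktemp -d)
mkdir -p "$root/images/IMG_UUID"
touch "$root/images/IMG_UUID/BASE_VOL"   # stand-in for the backing volume
cd "$root/images/IMG_UUID"               # the new overlay would be created here
rel='../IMG_UUID/BASE_VOL'
if [ -e "$rel" ]; then
    echo "backing path resolves"
else
    echo "backing path missing"          # would surface as ENOENT from qemu-img
fi
```

On the real host, the equivalent check is to cd into the image directory from the log and `ls` the exact relative backing_file path qemu-img was given.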
>
>
> do you have any alert in the webadmin to restart the vm?
>
> Dafna
>
>
> On 04/22/2014 03:31 PM, Steve Dainard wrote:
>
> Sorry for the confusion.
>
> I attempted to take a live snapshot of a running VM. After
> that failed, I migrated the VM to another host, and attempted
> the live snapshot again without success, eliminating a single
> host as the cause of failure.
>
> oVirt is 3.3.4, the storage domain is gluster 3.4.2.1, and the
> OS is CentOS 6.5.
>
> Package versions:
> libvirt-0.10.2-29.el6_5.5.x86_64
> libvirt-lock-sanlock-0.10.2-29.el6_5.5.x86_64
> qemu-img-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
> qemu-kvm-rhev-0.12.1.2-2.415.el6.nux.3.x86_64
> qemu-kvm-rhev-tools-0.12.1.2-2.415.el6.nux.3.x86_64
> vdsm-4.13.3-4.el6.x86_64
> vdsm-gluster-4.13.3-4.el6.noarch
>
>
> I made another live snapshot attempt at 10:21 EST today, full
> vdsm.log attached, and a truncated engine.log.
>
> Thanks,
>
> Steve
>
>
>
> On Tue, Apr 22, 2014 at 9:48 AM, Dafna Ron <dron(a)redhat.com> wrote:
>
> please explain the flow of what you are trying to do: are you
> trying to live migrate the disk (from one storage domain to
> another)? are you trying to migrate the vm and, once the
> migration is finished, take a live snapshot of it? or are you
> trying to take a live snapshot of the vm during a vm migration
> from host1 to host2?
>
> Please attach full vdsm logs from each host you are using (if
> you are migrating the vm from host1 to host2), and please
> attach the engine log as well.
>
> Also, what are the vdsm, libvirt and qemu versions, what ovirt
> version are you using, and what storage are you using?
>
> Thanks,
>
> Dafna
>
>
>
>
> On 04/22/2014 02:12 PM, Steve Dainard wrote:
>
> I've attempted migrating the vm to another host and
> taking a
> snapshot, but I get this error:
>
> 6efd33f4-984c-4513-b5e6-fffdca2e983b::ERROR::2014-04-22 01:09:37,296::volume::286::Storage.Volume::(clone) Volume.clone: can't clone: /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/1a67de4b-aa1c-4436-baca-ca55726d54d7 to /rhev/data-center/9497ef2c-8368-4c92-8d61-7f318a90748f/95b9d922-4df7-4d3b-9bca-467e2fd9d573/images/466d9ae9-e46a-46f8-9f4b-964d8af0675b/b230596f-97bc-4532-ba57-5654fa9c6c51
>
> A bit more of the vdsm log is attached.
>
> Other VMs are snapshotting without issue.
>
>
>
> Any help appreciated,
>
> Steve
>
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
--
Dafna Ron