On 13 Jun 2016, at 11:48, Colin Coe <colin.coe@gmail.com> wrote:

> Initially we only saw this on VMs with 2 or more disks. Subsequently we
> confirmed that it does happen on single-disk VMs also.

Sounds like an issue with snapshots; "backing filename too long" should be
https://bugzilla.redhat.com/show_bug.cgi?id=1333627

Tal, can you confirm?

Thanks,
michal
> CC
>
> ---
>
> Sent from my Nexus 5
>
> On Jun 13, 2016 5:12 PM, "gregor" <gregor_forum@catrix.at> wrote:
> The VM has two disks, both VirtIO. During testing it became clear that
> the problem occurs only with two disks: when I select only one disk for
> the snapshot, it works.
> Is this a problem in oVirt, or is it not possible to use two disks on a
> VM in oVirt?
>
> Do you also have two or more disks on your VM?
>
> Here are the test results:
> -------------------------
> What does not work:
> - Export the VM: fails with error "ImageIsNotLegalChain and code 262"
> - Clone the VM: fails with error "IRSErrorException: Image is not a
>   legal chain" with the ID of the second disk.
>
> After removing the second disk:
> - Create offline snapshot: works
> - Remove offline snapshot: after two hours I ran "engine-setup
>   --offline" to clean up the locked snapshot !!! (see the note after
>   this list)
> - Export the VM: works
> - Import the exported VM: works
> - Add a disk to the imported VM: works
> - Create offline snapshot of the imported VM: fails
> - Clone the VM: works
> - Add a disk to the cloned VM: works
> - Create offline snapshot of the cloned VM: fails
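>
> (Note on the locked snapshot: "engine-setup --offline" is a heavy
> hammer for this. oVirt also ships a helper script for unlocking stuck
> entities, so something along these lines may be enough; the path and
> flags here are from memory and may differ per version, check --help
> first:
>
> cd /usr/share/ovirt-engine/setup/dbutils
> ./unlock_entity.sh -t snapshot -q             # list locked snapshots
> ./unlock_entity.sh -t snapshot <snapshot_id>  # unlock the stuck one
> )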
>
> What works:
> - Take an offline snapshot with only the system disk: works
> - Remove the offline snapshot of the system disk: works
> - Take an online snapshot with only the system disk: works
> - Remove the online snapshot of the system disk: works
>
> cheers
> gregor
>
> On 12/06/16 19:42, gregor wrote:
>
> Hi,
>
> I solved my problem. Here are the steps, but be careful: if you don't
> know what the commands do and how to restore from a backup, don't
> follow this:
>
> - ssh to the host
> - systemctl stop ovirt-engine
> - back up the database with "engine-backup"
> - navigate to the image files
> - back up the images: sudo -u vdsm rsync -av <uuid> <uuid_backup>
> - check which one is the backing file: qemu-img info <file>
> - check for damage: qemu-img check <file>
> - qemu-img commit <snapshot file>
> - rename the <snapshot file> plus its .lease and .meta so they can't
>   be accessed
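>
> Put together, the image-level steps above might look like this (a
> sketch only; the /rhev path and the UUID file names are examples,
> adjust them to your storage domain):
>
> cd /rhev/data-center/<pool_uuid>/<domain_uuid>/images
> sudo -u vdsm rsync -av <image_group_uuid> <image_group_uuid>_backup
> cd <image_group_uuid>
> qemu-img info <volume_uuid>        # shows the backing file, if any
> qemu-img check <volume_uuid>       # check the volume for damage
> qemu-img commit <snapshot_volume>  # merge snapshot into its backing file
> mv <snapshot_volume> <snapshot_volume>.bak  # likewise .lease and .meta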
>
> - vmname=srv03
> - db=engine
> - sudo -u postgres psql $db -c "SELECT b.disk_alias, s.description,
>   s.snapshot_id, i.creation_date, s.status, i.imagestatus, i.size,
>   i.image_group_id, i.vm_snapshot_id, i.image_guid, i.parentid, i.active
>   FROM images AS i JOIN snapshots AS s ON (i.vm_snapshot_id =
>   s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid) JOIN
>   base_disks AS b ON (i.image_group_id = b.disk_id) WHERE v.vm_name =
>   '$vmname' ORDER BY creation_date, description, disk_alias"
>
> - note the image_guid and parentid of the broken snapshot and of the
>   active snapshot; the active one is the image_guid with the parentid
>   00000000-0000-0000-0000-000000000000
> - igid_active=<active uuid>
> - igid_broken=<broken uuid>
> - the parentid of the broken snapshot's image_guid must be the same as
>   the active snapshot's image_guid
> - note the snapshot IDs
> - sid_active=<id of the active snapshot with parentid 000000...>
> - sid_broken=<id of the broken snapshot>
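>
> Instead of copying the UUIDs by hand, the active image can also be
> pulled straight into a shell variable, e.g. (an untested convenience
> sketch; with several disks this returns one row per disk, so add a
> disk_alias filter):
>
> igid_active=$(sudo -u postgres psql $db -t -A -c "SELECT i.image_guid
>   FROM images AS i JOIN snapshots AS s ON (i.vm_snapshot_id = s.snapshot_id)
>   JOIN vm_static AS v ON (s.vm_id = v.vm_guid)
>   WHERE v.vm_name = '$vmname'
>   AND i.parentid = '00000000-0000-0000-0000-000000000000'")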
>
> - delete the broken snapshot
> - sudo -u postgres psql $db -c "DELETE FROM snapshots AS s WHERE
>   s.snapshot_id = '$sid_broken'"
>
> - pid_new=00000000-0000-0000-0000-000000000000
> - sudo -u postgres psql $db -c "SELECT * FROM images WHERE
>   vm_snapshot_id = '$sid_active' AND image_guid = '$igid_broken'"
> - sudo -u postgres psql $db -c "DELETE FROM images WHERE
>   vm_snapshot_id = '$sid_broken' AND image_guid = '$igid_active'"
> - sudo -u postgres psql $db -c "SELECT * FROM image_storage_domain_map
>   WHERE image_id = '$igid_broken'"
> - sudo -u postgres psql $db -c "DELETE FROM image_storage_domain_map
>   WHERE image_id = '$igid_broken'"
> - sudo -u postgres psql $db -c "UPDATE images SET image_guid =
>   '$igid_active', parentid = '$pid_new' WHERE vm_snapshot_id =
>   '$sid_active' AND image_guid = '$igid_broken'"
> - sudo -u postgres psql $db -c "SELECT * FROM image_storage_domain_map"
> - storid=<storage_domain_id>
> - diskprofileid=<disk_profile_id>
> - sudo -u postgres psql $db -c "INSERT INTO image_storage_domain_map
>   (image_id, storage_domain_id, disk_profile_id) VALUES ('$igid_broken',
>   '$storid', '$diskprofileid')"
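>
> Since several of these statements are destructive, a safer variant is
> to run them (after doing the SELECT checks above!) inside a single
> transaction, so a typo can still be rolled back before COMMIT. A
> sketch, mirroring the statements above:
>
> sudo -u postgres psql $db <<SQL
> BEGIN;
> DELETE FROM snapshots WHERE snapshot_id = '$sid_broken';
> DELETE FROM images WHERE vm_snapshot_id = '$sid_broken'
>   AND image_guid = '$igid_active';
> DELETE FROM image_storage_domain_map WHERE image_id = '$igid_broken';
> UPDATE images SET image_guid = '$igid_active', parentid = '$pid_new'
>   WHERE vm_snapshot_id = '$sid_active' AND image_guid = '$igid_broken';
> INSERT INTO image_storage_domain_map
>   (image_id, storage_domain_id, disk_profile_id)
>   VALUES ('$igid_broken', '$storid', '$diskprofileid');
> COMMIT;
> SQL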
>
> - check the values
> - sudo -u postgres psql $db -c "SELECT b.disk_alias, s.description,
>   s.snapshot_id, i.creation_date, s.status, i.imagestatus, i.size,
>   i.image_group_id, i.vm_snapshot_id, i.image_guid, i.parentid, i.active
>   FROM images AS i JOIN snapshots AS s ON (i.vm_snapshot_id =
>   s.snapshot_id) LEFT JOIN vm_static AS v ON (s.vm_id = v.vm_guid) JOIN
>   base_disks AS b ON (i.image_group_id = b.disk_id) WHERE v.vm_name =
>   '$vmname' ORDER BY creation_date, description, disk_alias"
>   (the stray "could not change directory to /root/Backups/oVirt" output
>   is only psql complaining that the postgres user cannot read the
>   current directory; it is harmless)
>
> - check for errors
> - engine-setup --offline
> - systemctl start ovirt-engine
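>
> To verify the engine really is healthy before letting users back on,
> something like this helps (the log path is the usual default):
>
> systemctl status ovirt-engine                   # engine up again?
> tail -n 50 /var/log/ovirt-engine/engine.log     # errors on startup?
> sudo -u vdsm qemu-img check <remaining volume>  # image chain clean?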
>
> Now you should have a clean state and a working VM ;-)
>
> What was tested:
> - Power up and down the VM
>
> What does not work:
> - It's not possible to take offline snapshots; online snapshots were
>   not tested because I don't want to get into such trouble again. It
>   took many hours until the machine was up again.
>
> PLEASE be aware and don't destroy your Host and VM !!!
>
> cheers
> gregor
>
> On 12/06/16 13:40, Colin Coe wrote:
>> We've seen this with both Linux and Windows VMs. I'm guessing that
>> you've had failures on this VM in both snapshot create and delete
>> operations. oVirt/RHEV 3.5 seems particularly affected. I'm told that
>> oVirt 3.6.7 has the last of the fixes for these known snapshot
>> problems.
>>
>> My original email was worded wrongly: I meant that qemu-img gives
>> "backing filename too long" errors. You may have seen this in your
>> logs.
>>
>> Note also that you may be seeing an entirely unrelated problem.
>>
>> You may wish to post your VDSM logs and the qemu log from
>> /var/lib/libvirt/qemu/<vm_name>.log
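>>
>> For completeness, those are usually collected like this (standard
>> default paths; adjust if your install differs):
>>
>> tar czf vm-logs.tgz \
>>     /var/log/vdsm/vdsm.log \
>>     /var/lib/libvirt/qemu/<vm_name>.log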
>>
>> Hope this helps
>>
>> CC
>>
>>
>> On Sun, Jun 12, 2016 at 4:45 PM, gregor <gregor_forum@catrix.at> wrote:
>>
>> Sounds bad. Recreating the VM is not an option because it is a
>> productive VM; during testing I would need to recreate it more than
>> once. oVirt works perfectly with Linux VMs, but when it comes to
>> Windows VMs we get lots of problems.
>>
>> Which OS did you use on the problematic VM?
>>
>> cheers
>> gregor
>>
>> On 11/06/16 19:22, Anantha Raghava wrote:
>> > Hi,
>> >
>> > I observed this behaviour too.
>> >
>> > When we take a snapshot, the main VM from which the snapshot was
>> > taken is shut down, and a new VM named external-<VMName> comes to
>> > life. We cannot get the original VM back to life, but a clone starts
>> > functioning.
>> >
>> > We cannot remove the snapshot whether or not the VM is running. I
>> > had to remove the entire VM that came to life with the snapshot and
>> > recreate the entire VM from scratch. Luckily the VM was not yet in
>> > production, hence we could afford it.
>> >
>> > At first I could not understand why, when a snapshot is created,
>> > the VM with the snapshot comes to life and starts running instead
>> > of the original VM.
>> >
>> > Is it necessary that we shut down the VM before taking snapshots?
>> > A snapshot is supposed to be a backup of the original VM that, as I
>> > understand it, should not come to life unless we restore it by
>> > cloning.
>> >
>> > --
>> >
>> > Thanks & Regards,
>> >
>> > Anantha Raghava
>> >
>> >
>> > On Saturday 11 June 2016 08:09 PM, gregor wrote:
>> >> Hi,
>> >>
>> >> a VM has snapshots which cannot be removed while the VM is up.
>> >> Therefore I powered down the Windows Server 2012 VM. The snapshots
>> >> still cannot be removed, and now the VM can't boot anymore !!!
>> >>
>> >> This is the message from engine.log:
>> >>
>> >> ------------------
>> >> Message: VM srv03 is down with error. Exit message: Bad volume
>> >> specification
>> >> ------------------
>> >>
>> >> Cloning is not possible; I get:
>> >> ------------------
>> >> Message: VDSM command failed: Image is not a legal chain
>> >> ------------------
>> >>
>> >> All other VMs can be powered down and started without any problem.
>> >> What can I do?
>> >> This is very important because right now no one can work :-( !!!
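>> >>
>> >> (To see where the chain is broken, qemu-img can walk the backing
>> >> files on the host; the path below is just an example:
>> >>
>> >> qemu-img info --backing-chain \
>> >>     /rhev/data-center/<pool>/<domain>/images/<disk>/<volume>
>> >>
>> >> any layer with a missing or unreadable backing file shows up as an
>> >> error there.)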
>> >>
>> >> cheers
>> >> gregor
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users