I am having an issue where, when I use the REST API to connect a snapshot
from another VM to a Backup-Appliance-VM, after the clone, when I remove the
disk and delete the snapshot, the disk remains in the Backup-Appliance-VM as
/dev/sdb or /dev/vdb.
If I reboot the Backup-Appliance-VM, the disk disappears.
If I manually remove the disk with "echo 1 > /sys/block/sdb/device/delete",
the disk will disappear, but if I rescan the SCSI bus, it is found and shows
up again in the VM OS, but the oVirt WebGUI does NOT show it as connected.
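For reference, the manual delete-and-rescan dance described above can be
sketched as a small shell function (to be run as root inside the
Backup-Appliance-VM; "sdb" is an example device name, and the second argument
only exists so the function can be dry-run against a fake sysfs tree):

```shell
#!/bin/sh
# Sketch of the in-guest workaround: drop the stale block device, then
# rescan all SCSI hosts. The disk reappearing after the rescan shows the
# attachment still exists on the hypervisor side.
#   $1 = device name (e.g. sdb)
#   $2 = sysfs root (normally /sys; overridable for a dry run)
drop_and_rescan() {
    dev=${1:-sdb}
    sys=${2:-/sys}
    # Remove the stale block device from the guest's view:
    echo 1 > "$sys/block/$dev/device/delete"
    # Rescan every SCSI host; on an affected appliance the disk comes back:
    for h in "$sys"/class/scsi_host/host*; do
        echo '- - -' > "$h/scan"
    done
}
```

This only hides or re-discovers the device in the guest; it does not touch
the hypervisor-side attachment, which is why the oVirt WebGUI and the guest
disagree afterwards.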
I am also not able to attach any other disks; it complains of:
HotPlugDiskVDS failed: internal error: unable to execute QEMU command
'__com.redhat_drive_add': Duplicate ID 'drive-scsi0-0-0-2' for drive
I did see that others in the past have gotten around this issue by rebooting
the Backup-Appliance-VM and then continuing on with the next VM backup,
looping through backup-reboot-backup-reboot, etc.
Anyone have an idea on how to solve this issue and remove the hooks from the
guest OS?
Steps to reproduce this issue:
1. Create a backup appliance VM to be used for the backup script execution.
2. Currently I have the VMs set to virtio with threaded I/O enabled. Also
tried virtio_scsi with the same result.
3. Using REST API – make a snapshot of the target VM.
4. Using REST API – fetch the VM metadata.
5. Using REST API – attach the snapshot/disk to the Backup-Appliance-VM.
6. dd the drive to the backup folder.
7. Using REST API – remove the disk from the Backup-Appliance-VM.
8. Using REST API – delete the snapshot.
9. ** Check the guest OS of the Backup-Appliance-VM: the mounted drive from
the backup above still appears and behaves as mentioned in the comments
above.
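To make the REST sequence in steps 3-8 concrete, here is a minimal sketch.
It only prints the curl invocations (pipe a line to sh to actually run one);
the engine URL, credentials, and all IDs are placeholders, and the paths
assume the oVirt 3.x-era /api layout, so adjust for your setup:

```shell
#!/bin/sh
# Sketch of the backup REST call sequence. All IDs and credentials below
# are placeholders; the /api paths are assumptions based on oVirt 3.x.
API=${API:-https://engine.example.com/api}

# Print one curl invocation so the sequence is easy to audit before running.
rest() { # usage: rest METHOD PATH [XML-BODY]
    if [ -n "$3" ]; then
        echo "curl -s -k -u admin@internal:PASS -X $1" \
             "-H 'Content-Type: application/xml' -d '$3' $API$2"
    else
        echo "curl -s -k -u admin@internal:PASS -X $1 $API$2"
    fi
}

VM=TARGET_VM_ID; APPL=APPLIANCE_VM_ID; SNAP=SNAPSHOT_ID; DISK=DISK_ID

rest POST   "/vms/$VM/snapshots" '<snapshot><description>backup</description></snapshot>'
rest GET    "/vms/$VM"          # fetch VM metadata
rest POST   "/vms/$APPL/disks"  "<disk id=\"$DISK\"><snapshot id=\"$SNAP\"/><active>true</active></disk>"
# ... dd the attached device to the backup folder here ...
rest DELETE "/vms/$APPL/disks/$DISK" '<action><detach>true</detach></action>'
rest DELETE "/vms/$VM/snapshots/$SNAP"
```

The issue described above shows up between the last two calls: the detach
succeeds on the engine side, but the guest keeps the stale SCSI device.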
A second issue is that the above won't work when I have the VMs running on
MPIO iSCSI storage, so for testing I have moved to NFS4. If anyone has ideas
about either issue, I'd love to hear them ;)
Thanks
irc.oftc.net #ovirt
zipur