oVirt 4.2 - Removed disk remains in VM OS with hooks?

I am having an issue where, when I use the REST API to connect a snapshot from another VM to a Backup-Appliance-VM, after the clone, when I remove the disk and delete the snapshot, the disk remains in the Backup-Appliance-VM as /dev/sdb or /dev/vdb.

If I reboot the Backup-Appliance-VM the disk disappears.

If I manually remove the disk with "echo 1 > /sys/block/sdb/device/delete" the disk will disappear, but if I rescan the SCSI bus it is found and shows up again in the VM OS, even though the oVirt WebGUI does NOT show it as connected.

I am also not able to attach any other disks, as it complains of:

    HotPlugDiskVDS failed: internal error: unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-scsi0-0-0-2' for drive

I did see that others in the past have gotten around this issue by rebooting the Backup-Appliance-VM and then continuing on with the next VM backup, looping through backup-reboot-backup-reboot-etc.

Anyone have an idea on how to solve this issue and remove the hooks from the guest OS?

Steps to reproduce this issue:

1. Create a backup appliance VM to be used for the backup script execution.
2. Currently I have the VMs set to virtio with threaded I/O enabled. Also tried virtio_scsi with the same result.
3. Using REST API - make a snapshot of the target VM.
4. Using REST API - fetch the VM metadata.
5. Using REST API - attach the snapshot/disk to the Backup-Appliance-VM.
6. dd the drive to the backup folder.
7. Using REST API - remove the disk from the Backup-Appliance-VM.
8. Using REST API - delete the snapshot.
9. ** Check the guest OS of the Backup-Appliance-VM: the mounted drive from the backup above still appears and behaves as described above.
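For concreteness, steps 3-8 map roughly onto REST calls like the following. This is a minimal curl sketch, not my exact script; the engine URL, credentials, and UUIDs are placeholders, and error handling is omitted:

    # 3. snapshot the target VM
    curl -sk -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
      -d '<snapshot><description>backup</description><persist_memorystate>false</persist_memorystate></snapshot>' \
      https://engine.example.com/ovirt-engine/api/vms/TARGET_VM_ID/snapshots

    # 5. attach the snapshot's disk to the backup appliance
    curl -sk -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
      -d '<disk_attachment><active>true</active><bootable>false</bootable><interface>virtio_scsi</interface><disk id="DISK_ID"><snapshot id="SNAPSHOT_ID"/></disk></disk_attachment>' \
      https://engine.example.com/ovirt-engine/api/vms/BACKUP_VM_ID/diskattachments

    # 7. detach the disk again (removes the attachment; the disk itself stays)
    curl -sk -u 'admin@internal:PASSWORD' -X DELETE \
      https://engine.example.com/ovirt-engine/api/vms/BACKUP_VM_ID/diskattachments/DISK_ID

    # 8. delete the snapshot on the target VM
    curl -sk -u 'admin@internal:PASSWORD' -X DELETE \
      https://engine.example.com/ovirt-engine/api/vms/TARGET_VM_ID/snapshots/SNAPSHOT_ID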
A second issue is that the above won't work when I have the VMs running on MPIO iSCSI storage, so for testing I have moved to NFS4. Anyone have ideas about either issue, I'd love to hear ;)

Thanks

Irc.oftc.net #ovirt
zipur

On Mon, Jan 22, 2018 at 7:21 AM, Zip <plord@intricatenetworks.com> wrote:
I am having an issue where, when I use the REST API to connect a snapshot from another VM to a Backup-Appliance-VM, after the clone, when I remove the disk and delete the snapshot, the disk remains in the Backup-Appliance-VM as /dev/sdb or /dev/vdb.
If I reboot the Backup-Appliance-VM the disk disappears.
If I manually remove the disk with "echo 1 > /sys/block/sdb/device/delete" the disk will disappear, but if I rescan the SCSI bus it is found and shows up again in the VM OS, even though the oVirt WebGUI does NOT show it as connected.
The first part is expected - the 2nd isn't.
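The removal and rescan sequence in question is presumably along these lines (sdb and host0 are examples):

    # drop the stale disk from the guest's view
    echo 1 > /sys/block/sdb/device/delete
    # rescan the SCSI bus; the disk reappears because QEMU still exposes the drive
    echo "- - -" > /sys/class/scsi_host/host0/scan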
I am also not able to attach any other disks, as it complains of:
HotPlugDiskVDS failed: internal error: unable to execute QEMU command '__com.redhat_drive_add': Duplicate ID 'drive-scsi0-0-0-2' for drive
I did see that others in the past have gotten around this issue by rebooting the Backup-Appliance-VM and then continuing on with the next VM backup and looping through backup-reboot-backup-reboot-etc.
Anyone have an idea on how to solve this issue and remove the hooks from the guest OS?
Steps to reproduce this issue:
1. Create a backup appliance VM to be used for the backup script execution.
2. Currently I have the VMs set to virtio with threaded I/O enabled. Also tried virtio_scsi with the same result.
3. Using REST API - make a snapshot of the target VM.
4. Using REST API - fetch the VM metadata.
5. Using REST API - attach the snapshot/disk to the Backup-Appliance-VM.
6. dd the drive to the backup folder.
7. Using REST API - remove the disk from the Backup-Appliance-VM.
8. Using REST API - delete the snapshot.
9. ** Check the guest OS of the Backup-Appliance-VM: the mounted drive from the backup above still appears and behaves as described above.
There are many details missing, including versions of everything used, but logs would be most helpful here.
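For reference, the logs usually meant here, assuming a default installation layout:

    # on the engine machine
    /var/log/ovirt-engine/engine.log
    # on the host running the backup appliance VM
    /var/log/vdsm/vdsm.log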
A second issue is that the above won't work when I have the VMs running on MPIO iSCSI storage, so for testing I have moved to NFS4. Anyone have ideas about either issue, I'd love to hear ;)
Same - logs would be helpful here.
Y.
Thanks
Irc.oftc.net #ovirt
zipur

-- The first part is expected - the 2nd isn't.

What are you referring to as the first and second part?
-- There are many details missing, including versions of everything used, but logs would be most helpful here.

Versions for oVirt are all the most recent; this is a fresh install of the Hosted Engine. I will just script the backup to cycle through sdb, sdc, sdd, ... It just seems odd that once a disk is detached and a snapshot deleted, the Backup-Appliance-VM can still access the drive/snapshot?
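A sketch of what that cycling might look like (the /backup path is a placeholder, and picking the highest-lettered disk is only a heuristic for the most recently attached one):

    # copy the most recently hot-plugged disk raw to the backup folder;
    # while stale disks linger, each new attachment lands on the next letter
    dev=$(ls /dev/sd? | sort | tail -n1)
    dd if="$dev" of="/backup/$(basename "$dev")-$(date +%F).raw" bs=4M conv=sparse status=progress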
-- Same - logs would be helpful here.

I will continue to dig through this issue and will post logs if stuck. I just wanted to know if there was anything obvious that I should be doing differently with iSCSI vs NFS when mounting disks/snapshots.

Zip
participants (2)
- Yaniv Kaul
- Zip