<div dir="ltr"><div><div>ok, so it looks like ovirt is creating PVs, VGs, and LVs associated to a iSCSI disk ... now, when i try to add the other storage with the same LUN ID - this is how it looks like : <br><br>[root@TESTHV01 ~]# pvs<br>
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Volume Group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 is not consistent
  PV                         VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/1IET_00010001  7d0f78ff-aa25-4b64-a2ea-c8a65beda616 lvm2 a--   68.00g 54.12g
  /dev/sda2                  vg_root                              lvm2 a--   78.12g      0
  /dev/sda3                  vg_vol                               lvm2 a--  759.26g      0

[root@TESTHV01 ~]# vgs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Inconsistent metadata found for VG 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 - updating to use version 20
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  Automatic metadata correction failed
  Recovery of volume group "7d0f78ff-aa25-4b64-a2ea-c8a65beda616" failed.
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  VG      #PV #LV #SN Attr   VSize   VFree
  vg_root   1   2   0 wz--n-  78.12g     0
  vg_vol    1   1   0 wz--n- 759.26g     0

[root@TESTHV01 ~]# lvs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Inconsistent metadata found for VG 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 - updating to use version 20
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  Automatic metadata correction failed
  Recovery of volume group "7d0f78ff-aa25-4b64-a2ea-c8a65beda616" failed.
  Skipping volume group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  LV      VG      Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  lv_root vg_root -wi-ao--  68.36g
  lv_swap vg_root -wi-ao--   9.77g
  lv_vol  vg_vol  -wi-ao-- 759.26g

VS

[root@TESTHV01 ~]# pvs
  PV                         VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/1IET_00010001  7d0f78ff-aa25-4b64-a2ea-c8a65beda616 lvm2 a--   68.00g 54.12g
  /dev/sda2                  vg_root                              lvm2 a--   78.12g      0
  /dev/sda3                  vg_vol                               lvm2 a--  759.26g      0
[root@TESTHV01 ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize   VFree
  7d0f78ff-aa25-4b64-a2ea-c8a65beda616   1   7   0 wz--n-  68.00g 54.12g
  vg_root                                1   2   0 wz--n-  78.12g      0
  vg_vol                                 1   1   0 wz--n- 759.26g      0
[root@TESTHV01 ~]# lvs
  LV                                   VG                                   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  fe6f8584-b6da-4ef0-8879-bf23022827d7 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-----  10.00g
  ids                                  7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  inbox                                7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  leases                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a---   2.00g
  master                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-ao--   1.00g
  metadata                             7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 512.00m
  outbox                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  lv_root                              vg_root                              -wi-ao--  68.36g
  lv_swap                              vg_root                              -wi-ao--   9.77g
  lv_vol                               vg_vol                               -wi-ao-- 759.26g
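A quick way to see why LVM ends up this confused might be to compare the SCSI IDs the two targets report with what multipath builds out of them. This is just a sketch - /dev/sdb is from the fdisk output further down, /dev/sdc is a hypothetical name for the second LUN:

/lib/udev/scsi_id --whitelisted --device=/dev/sdb   # prints the WWID, here 1IET_00010001
/lib/udev/scsi_id --whitelisted --device=/dev/sdc   # hypothetical second LUN; an identical WWID is the problem
multipath -ll                                       # shows both paths merged under the same 1IET_00010001 map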
The oVirt Node (HV) seems to use an LV created within that same VG (the one backing the storage domain) as the disk for the VM. This is what it looks like - various LVs get created for the management of the storage domain (I guess):
Disk /dev/sdb: 73.4 GB, 73400320000 bytes
255 heads, 63 sectors/track, 8923 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/1IET_00010001: 73.4 GB, 73400320000 bytes
255 heads, 63 sectors/track, 8923 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-metadata: 536 MB, 536870912 bytes
255 heads, 63 sectors/track, 65 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-ids: 134 MB, 134217728 bytes
255 heads, 63 sectors/track, 16 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-leases: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-inbox: 134 MB, 134217728 bytes
255 heads, 63 sectors/track, 16 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-outbox: 134 MB, 134217728 bytes
255 heads, 63 sectors/track, 16 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-master: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-fe6f8584--b6da--4ef0--8879--bf23022827d7: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000c91a

                                                                          Device Boot      Start         End      Blocks   Id  System
/dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-fe6f8584--b6da--4ef0--8879   *             1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-fe6f8584--b6da--4ef0--8879                64        1306     9972736   8e  Linux LVM

Now I wonder whether there is a maximum on the number of iSCSI storage domains you can have? I'd think that depends on a device-mapper limit? Is there a max?
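I don't know what the hard limit is, but for what it's worth the number of device-mapper devices currently in play can at least be counted (standard dmsetup calls, nothing oVirt-specific):

dmsetup ls | wc -l                        # every dm device: LVs plus multipath maps
dmsetup ls --target multipath | wc -l     # only the multipath maps, i.e. one per LUN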
Alex

On 28 February 2013 16:30, Alex Leonhardt <alex.tuxx@gmail.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>FWIW, restarting the HV "resolves" the issue and brings the storage domain back up; but I dont know whether that is because that's where the target (and iscsi initiator) runs or whether vdsmd then clears its cache / routes to a/the target(s) ? <br>
<span class="HOEnZb"><font color="#888888">
<br></font></span></div><span class="HOEnZb"><font color="#888888">Alex<br><br></font></span></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><br><div class="gmail_quote">On 28 February 2013 16:25, Alex Leonhardt <span dir="ltr"><<a href="mailto:alex.tuxx@gmail.com" target="_blank">alex.tuxx@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>another screenshot of how confused it can get <br><span><font color="#888888"><br></font></span></div>
<span><font color="#888888">alex<br></font></span></div><div><div><div class="gmail_extra"><br><br><div class="gmail_quote">On 28 February 2013 15:36, Alex Leonhardt <span dir="ltr"><<a href="mailto:alex.tuxx@gmail.com" target="_blank">alex.tuxx@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div>Hi there,<br><br></div>I was doing some testing around ovirt and iscsi and found an issue where as when you use "dd" to create "backing-stores" for iscsi and you point ovirt to it to discover & login, it thinks the LUN ID is the same although the target is different and adds additional paths to the config (automagically?) bringing down the iSCSI storage domain. <br>
>>>
>>> See the attached screenshot of what I got when trying to add a "new iSCSI SAN storage domain" to oVirt. The storage domain is now down and I cannot get rid of the config (???). How do I force it to log out of the targets??
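For the logout part, plain iscsiadm on the HV should be able to drop the sessions and node records - sketch below; <target-iqn> and <portal-ip> are placeholders, and whether engine/vdsm logs back in afterwards I'm not sure:

iscsiadm -m session                                             # list the sessions that are still logged in
iscsiadm -m node -T <target-iqn> -p <portal-ip>:3260 -u         # log out of that target
iscsiadm -m node -T <target-iqn> -p <portal-ip>:3260 -o delete  # remove the node record so it is not re-used on restart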
>>>
>>> Also, does anyone know how to deal with the duplicate LUN ID issue?
>>>
>>> Thanks
>>> Alex
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div dir="ltr"><div><br></div>| RHCE | Senior Systems Engineer | <a href="http://www.vcore.co" target="_blank">www.vcore.co</a> | <a href="http://www.vsearchcloud.com" target="_blank">www.vsearchcloud.com</a> | <br>
</div>
</div>