OK, so it looks like oVirt is creating PVs, VGs, and LVs associated with an
iSCSI disk ... now, when I try to add the other storage with the same LUN
ID, this is what it looks like:
[root@TESTHV01 ~]# pvs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Volume Group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 is not consistent
  PV                        VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/1IET_00010001 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 lvm2 a--   68.00g 54.12g
  /dev/sda2                 vg_root                              lvm2 a--   78.12g      0
  /dev/sda3                 vg_vol                               lvm2 a--  759.26g      0
[root@TESTHV01 ~]# vgs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Inconsistent metadata found for VG 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 - updating to use version 20
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  Automatic metadata correction failed
  Recovery of volume group "7d0f78ff-aa25-4b64-a2ea-c8a65beda616" failed.
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  VG      #PV #LV #SN Attr   VSize   VFree
  vg_root   1   2   0 wz--n-  78.12g    0
  vg_vol    1   1   0 wz--n- 759.26g    0
[root@TESTHV01 ~]# lvs
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  WARNING: Inconsistent metadata found for VG 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 - updating to use version 20
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  Automatic metadata correction failed
  Recovery of volume group "7d0f78ff-aa25-4b64-a2ea-c8a65beda616" failed.
  Skipping volume group 7d0f78ff-aa25-4b64-a2ea-c8a65beda616
  /dev/mapper/1IET_00010001: lseek 73266102272 failed: Invalid argument
  LV      VG      Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  lv_root vg_root -wi-ao--  68.36g
  lv_swap vg_root -wi-ao--   9.77g
  lv_vol  vg_vol  -wi-ao-- 759.26g
vs. the same commands once it's consistent again:
[root@TESTHV01 ~]# pvs
  PV                        VG                                   Fmt  Attr PSize   PFree
  /dev/mapper/1IET_00010001 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 lvm2 a--   68.00g 54.12g
  /dev/sda2                 vg_root                              lvm2 a--   78.12g      0
  /dev/sda3                 vg_vol                               lvm2 a--  759.26g      0
[root@TESTHV01 ~]# vgs
  VG                                   #PV #LV #SN Attr   VSize   VFree
  7d0f78ff-aa25-4b64-a2ea-c8a65beda616   1   7   0 wz--n-  68.00g 54.12g
  vg_root                                1   2   0 wz--n-  78.12g      0
  vg_vol                                 1   1   0 wz--n- 759.26g      0
[root@TESTHV01 ~]# lvs
  LV                                   VG                                   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  fe6f8584-b6da-4ef0-8879-bf23022827d7 7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-----  10.00g
  ids                                  7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  inbox                                7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  leases                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a---   2.00g
  master                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-ao--   1.00g
  metadata                             7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 512.00m
  outbox                               7d0f78ff-aa25-4b64-a2ea-c8a65beda616 -wi-a--- 128.00m
  lv_root                              vg_root                              -wi-ao--  68.36g
  lv_swap                              vg_root                              -wi-ao--   9.77g
  lv_vol                               vg_vol                               -wi-ao-- 759.26g
oVirt Node (HV) seems to use an LV, created within the same VG as the
storage domain, as the disk for the VM. And this is what it looks like -
various LVs get created for the "management of the storage domain" (I
guess):
Disk /dev/sdb: 73.4 GB, 73400320000 bytes
255 heads, 63 sectors/track, 8923 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/1IET_00010001: 73.4 GB, 73400320000 bytes
255 heads, 63 sectors/track, 8923 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-metadata: 536 MB, 536870912 bytes
255 heads, 63 sectors/track, 65 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-ids: 134 MB, 134217728 bytes
255 heads, 63 sectors/track, 16 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-leases: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-inbox: 134 MB, 134217728 bytes
255 heads, 63 sectors/track, 16 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-outbox: 134 MB, 134217728 bytes
255 heads, 63 sectors/track, 16 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-master: 1073 MB, 1073741824 bytes
255 heads, 63 sectors/track, 130 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-fe6f8584--b6da--4ef0--8879--bf23022827d7: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0000c91a
Device Boot Start End Blocks Id System
/dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-fe6f8584--b6da--4ef0--8879 *    1    64    512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/mapper/7d0f78ff--aa25--4b64--a2ea--c8a65beda616-fe6f8584--b6da--4ef0--8879     64  1306   9972736 8e Linux LVM
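Side note on those LVs: if I understand vdsm right, the fixed set (metadata, ids,
leases, inbox, outbox, master) belongs to the storage domain itself, while the
UUID-named LV (fe6f8584-...) is the actual VM disk image. vdsm also seems to tag
the image LVs, so something like this should show which image is which:

  lvs -o lv_name,lv_size,lv_tags 7d0f78ff-aa25-4b64-a2ea-c8a65beda616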
Now I wonder whether there is a max on the number of iSCSI storage domains
you can have? I'd think that depends on device-mapper's limits? Is there a
max?
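(For what it's worth, I don't think device-mapper itself imposes any practical cap
on the number of maps - you'd more likely hit iSCSI session or management limits
first. You can at least see what's currently in use:)

  dmsetup ls | wc -l    # how many dm devices exist right now
  dmsetup ls --tree     # how the domain's LVs stack on top of the multipath map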
Alex
On 28 February 2013 16:30, Alex Leonhardt <alex.tuxx(a)gmail.com> wrote:
FWIW, restarting the HV "resolves" the issue and brings the storage domain
back up; but I don't know whether that's because that's where the target
(and iSCSI initiator) runs, or whether vdsmd then clears its cache / routes
to a/the target(s)?
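In case a full restart isn't needed: I'd expect something along these lines to
drop the stale session and map by hand (the target IQN and portal below are
placeholders for whatever the second target actually is):

  iscsiadm -m session                                    # list logged-in sessions
  iscsiadm -m node -T <target-iqn> -p <portal>:3260 -u   # log out of the offending target
  multipath -f 1IET_00010001                             # flush the now-stale multipath map
  multipath -r                                           # rebuild the remaining maps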
Alex
On 28 February 2013 16:25, Alex Leonhardt <alex.tuxx(a)gmail.com> wrote:
> another screenshot of how confused it can get
>
> alex
>
>
> On 28 February 2013 15:36, Alex Leonhardt <alex.tuxx(a)gmail.com> wrote:
>
>> Hi there,
>>
>> I was doing some testing around oVirt and iSCSI and found an issue:
>> when you use "dd" to create "backing-stores" for iSCSI and you point
>> oVirt at them to discover & log in, it thinks the LUN ID is the same
>> although the target is different, and adds additional paths to the
>> config (automagically?), bringing down the iSCSI storage domain.
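>> (I believe what's happening is that multipath identifies LUNs by their
>> SCSI ID / WWID, and IET hands out the same default ID for LUN 1 of every
>> target unless you set one - hence both showing up as 1IET_00010001 and
>> being treated as two paths to one device. You can check what each path
>> device reports with something like:)
>>
>>   scsi_id --whitelisted --device=/dev/sdb   # binary may live in /sbin or /lib/udev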
>>
>> See attached screenshot of what I got when trying to add a "new iscsi
>> san storage domain" to oVirt. The storage domain is now down and I
>> cannot get rid of the config (???). How do I force it to log out of the
>> targets?
>>
>> Also, does anyone know how to deal with the duplicate LUN ID issue?
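>> (One thing that should deal with the duplicate IDs, assuming IET's
>> ietd.conf syntax here - the IQNs and paths are made up - is giving each
>> backing store an explicit ScsiId/ScsiSN so the WWIDs differ:)
>>
>>   Target iqn.2013-02.com.example:store1
>>       Lun 0 Path=/srv/iscsi/store1.img,Type=fileio,ScsiId=store1,ScsiSN=store1-0001
>>   Target iqn.2013-02.com.example:store2
>>       Lun 0 Path=/srv/iscsi/store2.img,Type=fileio,ScsiId=store2,ScsiSN=store2-0001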
>>
>> Thanks
>> Alex
>>
>> --
>>
>> | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
>>
>
>
>
> --
>
> | RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |
>
--
| RHCE | Senior Systems Engineer | www.vcore.co | www.vsearchcloud.com |