[ovirt-users] Resize iSCSI LUN of storage domain

Gianluca Cecchi gianluca.cecchi at gmail.com
Tue Apr 11 10:51:09 UTC 2017


Hello,
my iSCSI storage domain in 4.1.1 is composed of one LUN of size 1 TB, and
there are two hosts accessing it.
I have to extend this LUN to 4 TB.
What is supposed to be done after having completed the resize operation at
storage array level?
I read the workflow here:
http://www.ovirt.org/develop/release-management/features/storage/lun-resize/

Does this imply that I have to do nothing on the host side?

Normally, on a physical server connecting to iSCSI volumes, I run the
"iscsiadm .... --rescan" command, and below is my whole workflow (on
CentOS 5.9; it is quite similar on CentOS 6.x, except for the multipath
refresh).

Is it oVirt itself that takes care of the iscsiadm --rescan command?

BTW: the "edit domain" screenshot should become a "manage domain"
screenshot now

Thanks,
Gianluca

I want to extend my filesystem from 250 GB to 600 GB

- current layout of FS, PV, multipath device

[g.cecchi@dbatest ~]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
...
/dev/mapper/VG_ORASAVE-LV_ORASAVE
                      247G   66G  178G  28% /orasave


[root@dbatest ~]# pvs /dev/mpath/mpsave
  PV                VG         Fmt  Attr PSize   PFree
  /dev/mpath/mpsave VG_ORASAVE lvm2 a--  250.00G     0

[root@dbatest ~]# multipath -l mpsave
mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
[size=250G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 12:0:0:0 sdc 8:32  [active][undef]
 \_ 11:0:0:0 sdd 8:48  [active][undef]


- currently configured iSCSI ifaces (ieth0 using eth0 and ieth1 using eth1)
[root@dbatest ~]# iscsiadm -m node -P 1

...

Target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save
Portal: 10.10.100.20:3260,1
Iface Name: ieth0
Iface Name: ieth1

- rescan of iSCSI
[root@dbatest ~]# for i in 0 1
> do
> iscsiadm -m node --targetname=iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save -I ieth$i --rescan
> done
Rescanning session [sid: 3, target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save, portal: 10.10.100.20,3260]
Rescanning session [sid: 4, target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save, portal: 10.10.100.20,3260]

In the messages log I get:
Apr 24 16:07:17 dbatest kernel: SCSI device sdd: 1258291200 512-byte hdwr sectors (644245 MB)
Apr 24 16:07:17 dbatest kernel: sdd: Write Protect is off
Apr 24 16:07:17 dbatest kernel: SCSI device sdd: drive cache: write through
Apr 24 16:07:17 dbatest kernel: sdd: detected capacity change from 268440698880 to 644245094400
Apr 24 16:07:17 dbatest kernel: SCSI device sdc: 1258291200 512-byte hdwr sectors (644245 MB)
Apr 24 16:07:17 dbatest kernel: sdc: Write Protect is off
Apr 24 16:07:17 dbatest kernel: SCSI device sdc: drive cache: write through
Apr 24 16:07:17 dbatest kernel: sdc: detected capacity change from 268440698880 to 644245094400
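As a quick sanity check on those kernel lines (my own arithmetic, not part of the original log): 1258291200 sectors of 512 bytes is exactly the 644245094400 bytes reported in the capacity-change message, i.e. exactly 600 GiB:

```shell
# 1258291200 512-byte sectors -> bytes -> GiB
sectors=1258291200
new_bytes=$((sectors * 512))
echo "$new_bytes bytes"                         # 644245094400 bytes
echo "$((new_bytes / 1024 / 1024 / 1024)) GiB"  # 600 GiB
```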

- Dry run of multipath refresh
[root@dbatest ~]# multipath -v2 -d
: mpsave (36090a0585094695aed0f95af0601f066)  EQLOGIC,100E-00
[size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
\_ round-robin 0 [prio=1][undef]
 \_ 12:0:0:0 sdc 8:32  [active][ready]
 \_ 11:0:0:0 sdd 8:48  [active][ready]

- execute the refresh of multipath
[root@dbatest ~]# multipath -v2
: mpsave (36090a0585094695aed0f95af0601f066)  EQLOGIC,100E-00
[size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
\_ round-robin 0 [prio=1][undef]
 \_ 12:0:0:0 sdc 8:32  [active][ready]
 \_ 11:0:0:0 sdd 8:48  [active][ready]

- verify new size:
[root@dbatest ~]# multipath -l mpsave
mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
[size=600G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 12:0:0:0 sdc 8:32  [active][undef]
 \_ 11:0:0:0 sdd 8:48  [active][undef]

- pvresize
[root@dbatest ~]# pvresize /dev/mapper/mpsave
  Physical volume "/dev/mpath/mpsave" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized

- verify the newly added 350 GB in the PV and in the VG
[root@dbatest ~]# pvs  /dev/mpath/mpsave
  PV                VG         Fmt  Attr PSize   PFree
  /dev/mpath/mpsave VG_ORASAVE lvm2 a--  600.00G 349.99G

[root@dbatest ~]# vgs VG_ORASAVE
  VG         #PV #LV #SN Attr   VSize   VFree
  VG_ORASAVE   1   1   0 wz--n- 600.00G 349.99G

- lvextend of the existing LV
[root@dbatest ~]# lvextend -l+100%FREE /dev/VG_ORASAVE/LV_ORASAVE
  Extending logical volume LV_ORASAVE to 600.00 GB
  Logical volume LV_ORASAVE successfully resized

[root@dbatest ~]# lvs VG_ORASAVE/LV_ORASAVE
  LV           VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LV_ORASAVE   VG_ORASAVE -wi-ao 600.00G


- resize FS
[root@dbatest ~]# resize2fs /dev/VG_ORASAVE/LV_ORASAVE
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VG_ORASAVE/LV_ORASAVE is mounted on /orasave; on-line resizing required
Performing an on-line resize of /dev/VG_ORASAVE/LV_ORASAVE to 157285376 (4k) blocks.
The filesystem on /dev/VG_ORASAVE/LV_ORASAVE is now 157285376 blocks long.
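Again my own arithmetic, not in the original output: 157285376 blocks of 4 KiB comes to 644240900096 bytes, just under the 600 GiB of the device; the difference is presumably LVM metadata, and filesystem overhead is also why df reports 591G:

```shell
# 157285376 4-KiB ext3 blocks -> bytes -> GiB
blocks=157285376
fs_bytes=$((blocks * 4096))
echo "$fs_bytes bytes"                         # 644240900096 bytes
echo "$((fs_bytes / 1024 / 1024 / 1024)) GiB"  # 599 GiB (rounded down)
```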

[root@dbatest ~]# df -h /orasave
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VG_ORASAVE-LV_ORASAVE
                      591G   66G  519G  12% /orasave
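For anyone wanting to replay this on a similar box, the whole workflow above condenses to the sketch below. It is a dry run by default (the run() wrapper only prints each command); the target IQN, ieth* iface names, multipath alias and VG/LV paths are the ones from my transcript, so substitute your own, drop the wrapper, and run as root at your own risk:

```shell
#!/bin/bash
# Condensed LUN-grow workflow (dry run: run() only echoes the commands).
set -eu
run() { echo "would run: $*"; }

TARGET="iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save"

# 1. rescan the iSCSI sessions on both ifaces
for i in 0 1; do
    run iscsiadm -m node --targetname="$TARGET" -I "ieth$i" --rescan
done

# 2. refresh the multipath map (-d first, to preview the new size)
run multipath -v2 -d
run multipath -v2

# 3. grow the PV, then the LV, then the filesystem, all online
run pvresize /dev/mapper/mpsave
run lvextend -l +100%FREE /dev/VG_ORASAVE/LV_ORASAVE
run resize2fs /dev/VG_ORASAVE/LV_ORASAVE
```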