

On Tue, Apr 11, 2017 at 1:51 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, my iSCSI storage domain in 4.1.1 is composed of 1 LUN of size 1 TB, and there are two hosts accessing it. I have to extend this LUN to 4 TB. What is supposed to be done after completing the resize operation at the storage array level?
It is supposed to be done once the LUN has been extended on the storage server.
I read the workflow here: http://www.ovirt.org/develop/release-management/features/storage/lun-resize/
Does this imply that I have to do nothing on the host side?
Yes, nothing needs to be done on the host side. Did you have any issue?
Normally, on a physical server connecting to iSCSI volumes, I run the command "iscsiadm .... --rescan", and the workflow below is what I follow (on CentOS 5.9; it is quite similar on CentOS 6.x, except for the multipath refresh).
Is it oVirt itself that takes care of the iscsiadm --rescan command?
The rescan will be done by VDSM.
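For comparison only, here is a minimal sketch of what an equivalent manual refresh on a host could look like; per the answer above it is not needed in oVirt, since VDSM handles the rescan, and the map name mpsave comes from the workflow quoted below:

# Illustrative only -- oVirt/VDSM performs the rescan itself
iscsiadm -m session --rescan    # rescan all logged-in iSCSI sessions
multipath -r                    # force a reload of the multipath maps
multipath -ll                   # check that the maps report the new size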
BTW: the "edit domain" screenshot should become a "manage domain" screenshot now.
Thanks, I will update the web site.
Thanks, Gianluca
I want to extend my filesystem from 250 GB to 600 GB.
- current layout of FS, PV, multipath device
[g.cecchi@dbatest ~]$ df -h
Filesystem                          Size  Used Avail Use% Mounted on
...
/dev/mapper/VG_ORASAVE-LV_ORASAVE   247G   66G  178G  28% /orasave
[root@dbatest ~]# pvs /dev/mpath/mpsave
  PV                VG         Fmt  Attr PSize   PFree
  /dev/mpath/mpsave VG_ORASAVE lvm2 a--  250.00G    0
[root@dbatest ~]# multipath -l mpsave
mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
[size=250G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 12:0:0:0 sdc 8:32 [active][undef]
 \_ 11:0:0:0 sdd 8:48 [active][undef]
- current configured iSCSI ifaces (ieth0 using eth0 and ieth1 using eth1)
[root@dbatest ~]# iscsiadm -m node -P 1
...
Target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save
        Portal: 10.10.100.20:3260,1
                Iface Name: ieth0
                Iface Name: ieth1
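For background, ifaces such as ieth0/ieth1 bound to eth0/eth1 are typically defined once with iscsiadm; a sketch of how that is usually done (shown only as context, with the interface names taken from the setup above):

iscsiadm -m iface -I ieth0 --op=new                                    # create the iface record
iscsiadm -m iface -I ieth0 --op=update -n iface.net_ifacename -v eth0  # bind it to eth0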
- rescan of iSCSI
[root@dbatest ~]# for i in 0 1
do iscsiadm -m node --targetname=iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save -I ieth$i --rescan
done
Rescanning session [sid: 3, target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save, portal: 10.10.100.20,3260]
Rescanning session [sid: 4, target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save, portal: 10.10.100.20,3260]
In messages I get:
Apr 24 16:07:17 dbatest kernel: SCSI device sdd: 1258291200 512-byte hdwr sectors (644245 MB)
Apr 24 16:07:17 dbatest kernel: sdd: Write Protect is off
Apr 24 16:07:17 dbatest kernel: SCSI device sdd: drive cache: write through
Apr 24 16:07:17 dbatest kernel: sdd: detected capacity change from 268440698880 to 644245094400
Apr 24 16:07:17 dbatest kernel: SCSI device sdc: 1258291200 512-byte hdwr sectors (644245 MB)
Apr 24 16:07:17 dbatest kernel: sdc: Write Protect is off
Apr 24 16:07:17 dbatest kernel: SCSI device sdc: drive cache: write through
Apr 24 16:07:17 dbatest kernel: sdc: detected capacity change from 268440698880 to 644245094400
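A quick way to confirm that the kernel really picked up the new capacity on each path, using the sdc/sdd path devices listed above (values are in 512-byte sectors):

cat /sys/block/sdc/size    # should now report 1258291200 sectors (~600 GB)
cat /sys/block/sdd/size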
- Dry run of multipath refresh
[root@dbatest ~]# multipath -v2 -d
: mpsave (36090a0585094695aed0f95af0601f066)  EQLOGIC,100E-00
[size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
\_ round-robin 0 [prio=1][undef]
 \_ 12:0:0:0 sdc 8:32 [active][ready]
 \_ 11:0:0:0 sdd 8:48 [active][ready]
- execute the refresh of multipath
[root@dbatest ~]# multipath -v2
: mpsave (36090a0585094695aed0f95af0601f066)  EQLOGIC,100E-00
[size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
\_ round-robin 0 [prio=1][undef]
 \_ 12:0:0:0 sdc 8:32 [active][ready]
 \_ 11:0:0:0 sdd 8:48 [active][ready]
- verify new size:
[root@dbatest ~]# multipath -l mpsave
mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
[size=600G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=0][active]
 \_ 12:0:0:0 sdc 8:32 [active][undef]
 \_ 11:0:0:0 sdd 8:48 [active][undef]
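Before the pvresize step, the size seen through the device-mapper device itself can also be double-checked, for example:

blockdev --getsize64 /dev/mapper/mpsave   # size in bytes, should now be about 644245094400 (~600 GiB)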
- pvresize
[root@dbatest ~]# pvresize /dev/mapper/mpsave
  Physical volume "/dev/mpath/mpsave" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
- verify the newly added 350 GB in the PV and in the VG
[root@dbatest ~]# pvs /dev/mpath/mpsave
  PV                VG         Fmt  Attr PSize   PFree
  /dev/mpath/mpsave VG_ORASAVE lvm2 a--  600.00G 349.99G
[root@dbatest ~]# vgs VG_ORASAVE
  VG         #PV #LV #SN Attr   VSize   VFree
  VG_ORASAVE   1   1   0 wz--n- 600.00G 349.99G
- lvextend of the existing LV
[root@dbatest ~]# lvextend -l+100%FREE /dev/VG_ORASAVE/LV_ORASAVE
  Extending logical volume LV_ORASAVE to 600.00 GB
  Logical volume LV_ORASAVE successfully resized
[root@dbatest ~]# lvs VG_ORASAVE/LV_ORASAVE
  LV         VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LV_ORASAVE VG_ORASAVE -wi-ao 600.00G
- resize FS
[root@dbatest ~]# resize2fs /dev/VG_ORASAVE/LV_ORASAVE
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/VG_ORASAVE/LV_ORASAVE is mounted on /orasave; on-line resizing required
Performing an on-line resize of /dev/VG_ORASAVE/LV_ORASAVE to 157285376 (4k) blocks.
The filesystem on /dev/VG_ORASAVE/LV_ORASAVE is now 157285376 blocks long.
[root@dbatest ~]# df -h /orasave
Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/VG_ORASAVE-LV_ORASAVE   591G   66G  519G  12% /orasave
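The resize2fs step above applies to ext2/3/4; on an XFS filesystem (the default on newer CentOS releases), the equivalent online grow would be along the lines of:

xfs_growfs /orasave    # XFS is grown online via the mount point (it cannot be shrunk)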

On Tue, Apr 11, 2017 at 11:31 PM, Fred Rolland <frolland@redhat.com> wrote:
On Tue, Apr 11, 2017 at 1:51 PM, Gianluca Cecchi <gianluca.cecchi@gmail.com> wrote:
Hello, my iSCSI storage domain in 4.1.1 is composed of 1 LUN of size 1 TB, and there are two hosts accessing it. I have to extend this LUN to 4 TB. What is supposed to be done after completing the resize operation at the storage array level?
It is supposed to be done once the LUN has been extended on the storage server.
I read the workflow here: http://www.ovirt.org/develop/release-management/features/storage/lun-resize/
Does this imply that I have to do nothing on the host side?
Yes, nothing needs to be done on the host side. Did you have any issue?
No, I haven't resized yet. I asked to get confirmation before proceeding.
Normally, on a physical server connecting to iSCSI volumes, I run the command "iscsiadm .... --rescan", and the workflow below is what I follow (on CentOS 5.9; it is quite similar on CentOS 6.x, except for the multipath refresh).
Is it oVirt itself that takes care of the iscsiadm --rescan command?
The rescan will be done by VDSM.
I confirm that all went well, as described in the link above, without any action needed on the host side. The storage domain was resized, multipath is OK on both hosts, and the iSCSI connections remained the same as before the resize operation.
Thanks
participants (2)
- Fred Rolland
- Gianluca Cecchi