<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Apr 11, 2017 at 1:51 PM, Gianluca Cecchi <span dir="ltr"><<a href="mailto:gianluca.cecchi@gmail.com" target="_blank">gianluca.cecchi@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Hello,</div><div>my iSCSI storage domain in 4.1.1 is composed by 1 lun of size 1TB and there are two hosts accessing it.</div><div>I have to extend this LUN to 4Tb.</div><div>What is supposed to be done after having completed the resize operation at storage array level?</div></div></blockquote><div>It is supposed to be done once the LUN as been extended on the storage server. <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>I read here the workflow:</div><div><a href="http://www.ovirt.org/develop/release-management/features/storage/lun-resize/" target="_blank">http://www.ovirt.org/develop/<wbr>release-management/features/<wbr>storage/lun-resize/</a><br></div><br></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div></div><div>Does this imply that I have to do nothing at host side?</div></div></blockquote><div><div><br></div>Yes,nothing needs to be done on host side.<br></div><div>Did you had any issue?<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><br></div><div>Normally inside a physical server connecting to iSCSI volumes I run the command "iscsiadm .... --rescan" and this one below is all my workflow (on CentOS 5.9, quite similar in CentOS 6.x, except the multipath refresh).</div><br></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div></div><div>Is it oVirt itself that takes in charge the iscsiadm --rescan command?</div></div></div></blockquote><div><div><br></div>The rescan will be done by the VDSM <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div> <br></div></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div></div><div>BTW: the "edit domain" screenshot should become a "manage domain" screenshot now</div></div></div></blockquote><div>Thanks, I will update the web site <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><br></div><div>Thanks,</div><div>Gianluca</div><div><br></div><div>I want to extend my filesystem from 250Gb to 600Gb</div><div><br></div><div>- current layout of FS, PV, multipath device</div><div><br></div><div>[g.cecchi@dbatest ~]$ df -h</div><div>Filesystem Size Used Avail Use% Mounted on</div><div>...</div><div>/dev/mapper/VG_ORASAVE-LV_<wbr>ORASAVE</div><div> 247G 66G 178G 28% /orasave</div><div><br></div><div><br></div><div>[root@dbatest ~]# pvs /dev/mpath/mpsave</div><div> PV VG Fmt Attr PSize PFree </div><div> /dev/mpath/mpsave VG_ORASAVE lvm2 a-- 250.00G 0 <br></div><div><br></div><div>[root@dbatest ~]# multipath -l 
> BTW: the "edit domain" screenshot should now be a "manage domain"
> screenshot.

Thanks, I will update the web site.

> Thanks,
> Gianluca
>
> I want to extend my filesystem from 250 GB to 600 GB.
>
> - current layout of FS, PV, multipath device
>
> [g.cecchi@dbatest ~]$ df -h
> Filesystem            Size  Used Avail Use% Mounted on
> ...
> /dev/mapper/VG_ORASAVE-LV_ORASAVE
>                       247G   66G  178G  28% /orasave
>
> [root@dbatest ~]# pvs /dev/mpath/mpsave
>   PV                VG         Fmt  Attr PSize   PFree
>   /dev/mpath/mpsave VG_ORASAVE lvm2 a--  250.00G    0
>
> [root@dbatest ~]# multipath -l mpsave
> mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
> [size=250G][features=1 queue_if_no_path][hwhandler=0][rw]
> \_ round-robin 0 [prio=0][active]
>  \_ 12:0:0:0 sdc 8:32  [active][undef]
>  \_ 11:0:0:0 sdd 8:48  [active][undef]
>
> - currently configured iSCSI ifaces (ieth0 using eth0 and ieth1 using eth1)
>
> [root@dbatest ~]# iscsiadm -m node -P 1
> ...
> Target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save
>         Portal: 10.10.100.20:3260,1
>                 Iface Name: ieth0
>                 Iface Name: ieth1
>
> - rescan of iSCSI
>
> [root@dbatest ~]# for i in 0 1
> > do
> > iscsiadm -m node --targetname=iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save -I ieth$i --rescan
> > done
> Rescanning session [sid: 3, target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save, portal: 10.10.100.20,3260]
> Rescanning session [sid: 4, target: iqn.2001-05.com.equallogic:0-8a0906-5a6994505-66f00106af950fed-dbatest-save, portal: 10.10.100.20,3260]
>
> In /var/log/messages I get:
> Apr 24 16:07:17 dbatest kernel: SCSI device sdd: 1258291200 512-byte hdwr sectors (644245 MB)
> Apr 24 16:07:17 dbatest kernel: sdd: Write Protect is off
> Apr 24 16:07:17 dbatest kernel: SCSI device sdd: drive cache: write through
> Apr 24 16:07:17 dbatest kernel: sdd: detected capacity change from 268440698880 to 644245094400
> Apr 24 16:07:17 dbatest kernel: SCSI device sdc: 1258291200 512-byte hdwr sectors (644245 MB)
> Apr 24 16:07:17 dbatest kernel: sdc: Write Protect is off
> Apr 24 16:07:17 dbatest kernel: SCSI device sdc: drive cache: write through
> Apr 24 16:07:17 dbatest kernel: sdc: detected capacity change from 268440698880 to 644245094400
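(Side note: if you want to double-check that each path really picked up the
new capacity before refreshing multipath, something along these lines works
on the same host; the expected values are the ones implied by the kernel
messages above, i.e. 600G = 644245094400 bytes = 1258291200 512-byte sectors:)

  # Size in bytes as currently seen by the kernel, one line per path device
  blockdev --getsize64 /dev/sdc /dev/sdd

  # Same check via sysfs, in 512-byte sectors
  cat /sys/block/sdc/size /sys/block/sdd/size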
> - dry run of the multipath refresh
>
> [root@dbatest ~]# multipath -v2 -d
> : mpsave (36090a0585094695aed0f95af0601f066) EQLOGIC,100E-00
> [size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
> \_ round-robin 0 [prio=1][undef]
>  \_ 12:0:0:0 sdc 8:32  [active][ready]
>  \_ 11:0:0:0 sdd 8:48  [active][ready]
>
> - execute the refresh of multipath
>
> [root@dbatest ~]# multipath -v2
> : mpsave (36090a0585094695aed0f95af0601f066) EQLOGIC,100E-00
> [size=600G][features=1 queue_if_no_path][hwhandler=0][n/a]
> \_ round-robin 0 [prio=1][undef]
>  \_ 12:0:0:0 sdc 8:32  [active][ready]
>  \_ 11:0:0:0 sdd 8:48  [active][ready]
>
> - verify the new size:
>
> [root@dbatest ~]# multipath -l mpsave
> mpsave (36090a0585094695aed0f95af0601f066) dm-4 EQLOGIC,100E-00
> [size=600G][features=1 queue_if_no_path][hwhandler=0][rw]
> \_ round-robin 0 [prio=0][active]
>  \_ 12:0:0:0 sdc 8:32  [active][undef]
>  \_ 11:0:0:0 sdd 8:48  [active][undef]
>
> - pvresize
>
> [root@dbatest ~]# pvresize /dev/mapper/mpsave
>   Physical volume "/dev/mpath/mpsave" changed
>   1 physical volume(s) resized / 0 physical volume(s) not resized
>
> - verify the newly added 350 GB in the PV and in the VG
>
> [root@dbatest ~]# pvs /dev/mpath/mpsave
>   PV                VG         Fmt  Attr PSize   PFree
>   /dev/mpath/mpsave VG_ORASAVE lvm2 a--  600.00G 349.99G
>
> [root@dbatest ~]# vgs VG_ORASAVE
>   VG         #PV #LV #SN Attr   VSize   VFree
>   VG_ORASAVE   1   1   0 wz--n- 600.00G 349.99G
>
> - lvextend of the existing LV
>
> [root@dbatest ~]# lvextend -l+100%FREE /dev/VG_ORASAVE/LV_ORASAVE
>   Extending logical volume LV_ORASAVE to 600.00 GB
>   Logical volume LV_ORASAVE successfully resized
>
> [root@dbatest ~]# lvs VG_ORASAVE/LV_ORASAVE
>   LV         VG         Attr   LSize   Origin Snap% Move Log Copy% Convert
>   LV_ORASAVE VG_ORASAVE -wi-ao 600.00G
>
> - resize the FS
>
> [root@dbatest ~]# resize2fs /dev/VG_ORASAVE/LV_ORASAVE
> resize2fs 1.39 (29-May-2006)
> Filesystem at /dev/VG_ORASAVE/LV_ORASAVE is mounted on /orasave; on-line resizing required
> Performing an on-line resize of /dev/VG_ORASAVE/LV_ORASAVE to 157285376 (4k) blocks.
> The filesystem on /dev/VG_ORASAVE/LV_ORASAVE is now 157285376 blocks long.
>
> [root@dbatest ~]# df -h /orasave
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/mapper/VG_ORASAVE-LV_ORASAVE
>                       591G   66G  519G  12% /orasave
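One more side note on the LVM part of the quoted workflow: on lvm2 builds
that ship the fsadm helper (EL6/EL7 do; I have not checked the CentOS 5.9
package), the lvextend and resize2fs steps can be collapsed into a single
command, for example:

  # Extend the LV with all remaining free space and grow the mounted
  # filesystem in the same step (-r is short for --resizefs)
  lvextend -r -l +100%FREE /dev/VG_ORASAVE/LV_ORASAVE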