[Users] Increase storage domain

Alan Johnson alan at datdec.com
Thu Sep 27 14:26:44 UTC 2012


On Wed, Sep 26, 2012 at 6:12 PM, Ayal Baron <abaron at redhat.com> wrote:

> Sounds really over-complicated for what you're trying to do.
>

Agreed!  That's why I asked. =)  To be clear, all that was necessary to end
up where I wanted was to reboot the hosts, which is not terribly
complicated, but it is time consuming and should not be necessary.  I tried
all those other steps based on recommendations in this thread to avoid the
reboot.


> After increasing the size of the LUN in the storage side try running the
> following command on the SPM:
> vdsClient -s 0 getDeviceList
> (-s is only if ssl is enabled, otherwise just remove it)
>
> After that run pvresize (for LVM to update its metadata).
> That should be it on the SPM side.
>

This did not make any difference.  I increased the LUN to 14.1 on the
EqualLogic box and then ran these commands (you may want to skip past this
to the text below, since I am leaning heavily toward the add-a-LUN method):

[root@cloudhost04 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/364ed2a35d83f5d68b705e54229020027
  VG Name               64c4a870-98dc-40fc-b21e-092156febcdc
  PV Size               14.00 TiB / not usable 129.00 MiB
  Allocatable           yes
  PE Size               128.00 MiB
  Total PE              114686
  Free PE               111983
  Allocated PE          2703
  PV UUID               h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO
[root@cloudhost04 ~]# vdsClient -s 0 getDeviceList
[{'GUID': '364ed2a35d83f5d68b705e54229020027',
  'capacity': '15393163837440',
  'devtype': 'iSCSI',
  'fwrev': '5.2',
  'logicalblocksize': '512',
  'partitioned': False,
  'pathlist': [{'connection': '10.10.5.18',
                'initiatorname': 'default',
                'iqn':
'iqn.2001-05.com.equallogic:4-52aed6-685d3fd83-2700022942e505b7-cloud2',
                'port': '3260',
                'portal': '1'}],
  'pathstatus': [{'lun': '0',
                  'physdev': 'sdd',
                  'state': 'active',
                  'type': 'iSCSI'}],
  'physicalblocksize': '512',
  'productID': '100E-00',
  'pvUUID': 'h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO',
  'serial': '',
  'vendorID': 'EQLOGIC',
  'vgUUID': 'XtdGHH-5WwC-oWRa-bv0V-me7t-T6ti-M9WKd2'}]

[root@cloudhost04 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/364ed2a35d83f5d68b705e54229020027
  VG Name               64c4a870-98dc-40fc-b21e-092156febcdc
  PV Size               14.00 TiB / not usable 129.00 MiB
  Allocatable           yes
  PE Size               128.00 MiB
  Total PE              114686
  Free PE               111983
  Allocated PE          2703
  PV UUID               h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO

[root@cloudhost04 ~]# pvresize /dev/mapper/364ed2a35d83f5d68b705e54229020027
  Physical volume "/dev/mapper/364ed2a35d83f5d68b705e54229020027" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
[root@cloudhost04 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/mapper/364ed2a35d83f5d68b705e54229020027
  VG Name               64c4a870-98dc-40fc-b21e-092156febcdc
  PV Size               14.00 TiB / not usable 129.00 MiB
  Allocatable           yes
  PE Size               128.00 MiB
  Total PE              114686
  Free PE               111983
  Allocated PE          2703
  PV UUID               h8tZon-o5sB-FR4M-m8oT-UPub-eM1w-7eexhO


So, no change.
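
As a sanity check (not something suggested in this thread, just standard
device-mapper/iSCSI tooling), the size the host actually sees for the
multipath device can be inspected directly; if it still reports the old
size, pvresize has nothing to grow into:

# size of the multipath device as currently seen by this host
multipath -ll 364ed2a35d83f5d68b705e54229020027
blockdev --getsize64 /dev/mapper/364ed2a35d83f5d68b705e54229020027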


> Then if indeed it succeeds, wait a little while for the engine to catch up
> (it periodically runs getStoragePoolInfo and updates its info about free
> space; you can find this in vdsm.log).
> Regardless, see below for the preferred method.
>

Thanks for the confirmation.  Any idea what the interval is?
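
In the meantime, the interval can presumably be read straight off the
timestamps in vdsm.log (assuming the default location of
/var/log/vdsm/vdsm.log on the SPM host):

# compare timestamps of consecutive getStoragePoolInfo calls
grep getStoragePoolInfo /var/log/vdsm/vdsm.log | tail -n 5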

> > Alternately, would it just be better to create a new LUN on the iSCSI
> > target and add it to the storage domain? Is that even doable?
>
> This flow is fully supported and is currently the easiest way of doing
> this (supported from the GUI and from the CLI). Simply extend a domain with
> a new LUN.
>

Great!  I'll give that a shot.
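
For what it's worth, my understanding is that extending a block domain with
an extra LUN amounts to the engine adding that LUN as a second PV to the
domain's VG, i.e. roughly the equivalent of the following (illustrative
only, with the VG name from the output above and a placeholder for the new
LUN; VDSM does this itself, so it is not something to run by hand):

# conceptual LVM equivalent of extending the domain with a new LUN
vgextend 64c4a870-98dc-40fc-b21e-092156febcdc /dev/mapper/<new-lun-wwid>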


>
> > Certainly it is as simple as adding a new PV to the VG in LVM, but
> > does the engine/GUI support it? It seems a bit more messy than
> > growing an existing domain from an iSCSI target point of view, but
> > are there any technical down sides?
>
> The target has nothing to do with it, you can have multiple LUNs behind
> the same target.


The target serves the LUNs, and it was the additional LUNs that I
was referring to as being messier when a single LUN could do the job.  Not
a big problem, since the LUNs can just follow the same naming pattern
(cloud<#> in my case), but when all other things are equal, fewer LUNs are
less to think about.

However, as I read this email, it occurred to me that some other things might
not be equal.  Specifically, using multiple LUNs could provide a means of
shrinking the storage domain in the future.  LVM provides a simple means to
remove a PV from a VG, but does the engine support this in the CLI or GUI?
That is, if a storage domain has multiple LUNs in it, can those be
removed at a later date?
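
The plain-LVM version of what I have in mind looks roughly like this
(placeholder device and VG names, and it assumes enough free extents remain
on the other PVs to absorb the data being moved; whether the engine exposes
an equivalent is exactly my question):

# migrate allocated extents off the PV, then drop it from the VG
pvmove /dev/mapper/<lun-to-remove>
vgreduce <vg-name> /dev/mapper/<lun-to-remove>
pvremove /dev/mapper/<lun-to-remove>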