[Users] Fibre channel - LVM problems
Ayal Baron
abaron at redhat.com
Mon Jul 29 14:13:05 UTC 2013
Hi Lukasz,
----- Original Message -----
> Hello,
>
> I'm trying to set up an oVirt 3.2.1 cluster with FC storage.
> However, I have encountered a problem when adding the storage to the
> cluster. The action fails with the error shown in the log fragment
> below.
>
> Thread-628::DEBUG::2013-07-25
> 21:07:56,385::task::568::TaskManager.Task::(_updateState)
> Task=`2a9c1ec3-3ab7-467c-949d-f47260e95dda`::moving from state init -> state
> preparing
> Thread-628::INFO::2013-07-25
> 21:07:56,386::logUtils::41::dispatcher::(wrapper) Run and protect:
> createVG(vgname='7d6f6cd0-608a-4221-aca4-67fffb874b45',
> devlist=['3600a0b800074a36e000006e951f14e7d'], force=False, options=None)
> Thread-628::DEBUG::2013-07-25
> 21:07:56,387::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /sbin/lvm pvcreate --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%3600a0b800074a36e000006e951f14e7d%\\", \\"r%.*%\\" ] } global {
> locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
> retain_min = 50 retain_days = 0 } " --metadatasize 128m --metadatacopies 2
> --metadataignore y /dev/mapper/3600a0b800074a36e000006e951f14e7d' (cwd None)
> Thread-628::DEBUG::2013-07-25
> 21:07:56,427::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = '';
> <rc> = 0
> Thread-628::DEBUG::2013-07-25
> 21:07:56,428::lvm::471::OperationMutex::(_invalidatepvs) Operation 'lvm
> invalidate operation' got the operation mutex
> Thread-628::DEBUG::2013-07-25
> 21:07:56,428::lvm::474::OperationMutex::(_invalidatepvs) Operation 'lvm
> invalidate operation' released the operation mutex
> Thread-628::DEBUG::2013-07-25
> 21:07:56,429::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n
> /sbin/lvm pvchange --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%3600a0b800074a36e000006e951f14e7d%\\", \\"r%.*%\\" ] } global {
> locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {
> retain_min = 50 retain_days = 0 } " --metadataignore n
> /dev/mapper/3600a0b800074a36e000006e951f14e7d' (cwd None)
> Thread-628::DEBUG::2013-07-25
> 21:07:56,468::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = '
> No device found for PV W8eewg-xskW-2b4S-NJf6-6aux-ySua-1kAU9J.\n
> /dev/mapper/3600a0b800074a36e000006e951f14e7d: lseek 18446744073575333888
> failed: Invalid argument\n /dev/mapper/3600a0b800074a36e000006e951f14e7d:
> lseek 18446744073575333888 failed: Invalid argument\n Failed to store
> physical volume "/dev/mapper/3600a0b800074a36e000006e951f14e7d"\n'; <rc> = 5
> Thread-628::ERROR::2013-07-25
> 21:07:56,470::task::833::TaskManager.Task::(_setError)
> Task=`2a9c1ec3-3ab7-467c-949d-f47260e95dda`::Unexpected error
> Traceback (most recent call last):
> File "/usr/share/vdsm/storage/task.py", line 840, in _run
> return fn(*args, **kargs)
> File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
> res = f(*args, **kwargs)
> File "/usr/share/vdsm/storage/hsm.py", line 1951, in createVG
> (force.capitalize() == "True")))
> File "/usr/share/vdsm/storage/lvm.py", line 865, in createVG
> raise se.PhysDevInitializationError(pvs[0])
> PhysDevInitializationError: Failed to initialize physical device:
> ('/dev/mapper/3600a0b800074a36e000006e951f14e7d',)
> Thread-628::DEBUG::2013-07-25
> 21:07:56,473::task::852::TaskManager.Task::(_run)
> Task=`2a9c1ec3-3ab7-467c-949d-f47260e95dda`::Task._run:
> 2a9c1ec3-3ab7-467c-949d-f47260e95dda
> ('7d6f6cd0-608a-4221-aca4-67fffb874b45',
> ['3600a0b800074a36e000006e951f14e7d'], False) {} failed - stopping task
>
>
> As you can see, the PV is created, but the next action (pvchange) fails
> with an invalid lseek offset. I have pinpointed the problem to the
> metadatacopies parameter: a PV created manually without this parameter
> works fine.
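For reference, the failing offset 18446744073575333888 is 2^64 - 134217728,
i.e. -128 MiB when read as a signed value, which happens to match the size of
the second metadata area that --metadatacopies 2 places at the end of the
device. A minimal way to reproduce this outside vdsm (a sketch only, assuming
the LUN is a scratch device and not yet part of any VG) would be something
like:

  pvcreate --metadatasize 128m --metadatacopies 2 --metadataignore y \
      /dev/mapper/3600a0b800074a36e000006e951f14e7d
  pvchange --metadataignore n /dev/mapper/3600a0b800074a36e000006e951f14e7d

and then the same pvcreate without --metadatacopies 2 for comparison.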
>
> I have found a similar issue here:
> https://www.redhat.com/archives/lvm-devel/2011-February/msg00127.html
After discussing this with Peter Rajnoha, it looks like the specific issue in
the thread above has been resolved, and yours is probably a reincarnation of
the same problem. Can you run the pvchange manually with -vvvv (something
like the command below) to get more info?
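Something along these lines should do; it is reconstructed from the vdsm log
above with vdsm's --config devices filter dropped for readability, so adjust
the device path if needed and run it as root on the host:

  pvchange -vvvv --metadataignore n \
      /dev/mapper/3600a0b800074a36e000006e951f14e7d

Please attach the full output.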
Thanks.
>
> List of PVs after the error:
> # pvs
>   PV                                            VG     Fmt  Attr PSize   PFree
>   /dev/mapper/3600a0b800074a36e000006e951f14e7d        lvm2 a--  100,00g 100,00g
>   /dev/sdb2                                     fedora lvm2 a--  556,44g       0
>
> # pvck -v /dev/mapper/3600a0b800074a36e000006e951f14e7d
> Scanning /dev/mapper/3600a0b800074a36e000006e951f14e7d
> Found label on /dev/mapper/3600a0b800074a36e000006e951f14e7d, sector 1,
> type=LVM2 001
> Found text metadata area: offset=4096, size=135262208
> Huge memory allocation (size 50003968) rejected - metadata corruption?
> Bounce buffer malloc failed
> Read from /dev/mapper/3600a0b800074a36e000006e951f14e7d failed
> Found text metadata area: offset=107239964672, size=134217728
>
> Huge memory allocation (size 50003968) rejected - metadata corruption?
> Bounce buffer malloc failed
> Read from /dev/mapper/3600a0b800074a36e000006e951f14e7d failed
>
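For reference, the second metadata area offset is consistent with a LUN of
exactly 100 GiB (which is what pvs suggests): 107374182400 - 134217728 =
107239964672, i.e. 128 MiB before the end of the device, so the on-disk
layout itself looks sane. If you want to double-check the device size:

  blockdev --getsize64 /dev/mapper/3600a0b800074a36e000006e951f14e7d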
> The OS is Fedora 19.
>
> # lvm version
> LVM version: 2.02.98(2) (2012-10-15)
> Library version: 1.02.77 (2012-10-15)
> Driver version: 4.24.0
>
> Best regards,
> Łukasz
>
>