[Users] HELP! Errors creating iSCSI Storage Domain

Dafna Ron dron at redhat.com
Mon Feb 25 10:20:01 UTC 2013


The metadata warning is not the issue: vdsm runs lvm with --autobackup n
(you can see it in the lvcreate command in your log), so you will always
see this warning when metadata is changed.
Looking at the logs, I can see the following error:

Thread-97::ERROR::2013-02-21
17:26:13,041::lvm::680::Storage.LVM::(_initpvs) [], ['  Can\'t
initialize physical volume
"/dev/mapper/36000eb35f449ac1c000000000000008c" of volume
 group "05fd0730-298a-4884-aa41-416416a1fb24" without -ff']
Thread-97::ERROR::2013-02-21
17:26:13,042::task::833::TaskManager.Task::(_setError)
Task=`37aa0fb8-b527-4d28-8a7c-1bc7a1d8732e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1948, in createVG
    (force.capitalize() == "True")))
  File "/usr/share/vdsm/storage/lvm.py", line 859, in createVG
    _initpvs(pvs, metadataSize, force)
  File "/usr/share/vdsm/storage/lvm.py", line 681, in _initpvs
    raise se.PhysDevInitializationError(str(devices))
PhysDevInitializationError: Failed to initialize physical device:
("['/dev/mapper/36000eb35f449ac1c000000000000008c']",)


This looks to me like this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=908776, which was already
solved.
Can you please update vdsm and try again?
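
For example (assuming a yum-based host; check the bug for the exact
fixed version):

# rpm -q vdsm
# yum update vdsm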

thanks,
Dafna
 


On 02/25/2013 03:57 AM, Кочанов Иван wrote:
>
> I have found the place where the error occurs that prevents iSCSI
> storage domain creation.
> The error is:
> FAILED: <err> = '  WARNING: This metadata update is NOT backed up\n 
> Not activating c59dcf20-2de2-44c6-858d-c4e9ec018f91/metadata since it
> does not pass activation filter.\n  Failed to activate new LV.\n 
> WARNING: This metadata update is NOT backed up\n'; <rc> = 5
> The reason is probably in the LVM metadata backup mechanism, but I'm
> not sure.
> Do you have any thoughts about it?
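>
> Another guess (also not sure): the "does not pass activation filter"
> message may come from the volume_list setting in /etc/lvm/lvm.conf,
> which can be checked with:
>
> # grep volume_list /etc/lvm/lvm.conf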
>
>
> Here is a part of the log:
>
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,217::BindingXMLRPC::161::vds::(wrapper) [192.168.0.22]
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,218::task::568::TaskManager.Task::(_updateState)
> Task=`9bc26f2d-0ee7-4afa-8159-59883f6d98d6`::moving from state init ->
> state preparing
> Thread-161377::INFO::2013-02-24
> 22:58:14,218::logUtils::41::dispatcher::(wrapper) Run and protect:
> createStorageDomain(storageType=3,
> sdUUID='c59dcf20-2de2-44c6-858d-c4e9ec018f91', domainName='OS',
> typeSpecificArg='Ecc2t6-uLUq-BNHq-Yjmx-d4w8-HqV6-Na88Cd', domClass=1,
> domVersion='3', options=None)
> Thread-161377::INFO::2013-02-24
> 22:58:14,220::blockSD::463::Storage.StorageDomain::(create)
> sdUUID=c59dcf20-2de2-44c6-858d-c4e9ec018f91 domainName=OS domClass=1
> vgUUID=Ecc2t6-uLUq-BNHq-Yjmx-d4w8-HqV6-Na88Cd storageType=3 version=3
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,221::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' got the operation mutex
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,222::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo
> -n /sbin/lvm vgs --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%36000eb35f449ac1c000000000000005d|36000eb35f449ac1c000000000000008c%\\",
> \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1 
> wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } "
> --noheadings --units b --nosuffix --separator | -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free'
> (cwd None)
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,470::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err>
> = ''; <rc> = 0
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,474::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm
> reload operation' released the operation mutex
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,474::lvm::409::OperationMutex::(_reloadlvs) Operation 'lvm
> reload operation' got the operation mutex
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,475::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo
> -n /sbin/lvm lvs --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%36000eb35f449ac1c000000000000005d|36000eb35f449ac1c000000000000008c%\\",
> \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1 
> wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } "
> --noheadings --units b --nosuffix --separator | -o
> uuid,name,vg_name,attr,size,seg_start_pe,devices,tags
> c59dcf20-2de2-44c6-858d-c4e9ec018f91' (cwd None)
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,702::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err>
> = ''; <rc> = 0
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,703::lvm::442::OperationMutex::(_reloadlvs) Operation 'lvm
> reload operation' released the operation mutex
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,704::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm
> reload operation' got the operation mutex
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,705::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo
> -n /sbin/lvm pvs --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%36000eb35f449ac1c000000000000005d|36000eb35f449ac1c000000000000008c%\\",
> \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1 
> wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } "
> --noheadings --units b --nosuffix --separator | -o
> uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size
> /dev/mapper/36000eb35f449ac1c000000000000008c' (cwd None)
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,889::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err>
> = ''; <rc> = 0
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,890::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm
> reload operation' released the operation mutex
> Thread-161377::INFO::2013-02-24
> 22:58:14,890::blockSD::448::Storage.StorageDomain::(metaSize) size 512
> MB (metaratio 262144)
> Thread-161377::DEBUG::2013-02-24
> 22:58:14,891::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo
> -n /sbin/lvm lvcreate --config " devices { preferred_names =
> [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 filter = [
> \\"a%36000eb35f449ac1c000000000000005d|36000eb35f449ac1c000000000000008c%\\",
> \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1 
> wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } "
> --autobackup n --contiguous n --size 512m --name metadata
> c59dcf20-2de2-44c6-858d-c4e9ec018f91' (cwd None)
> Thread-161377::DEBUG::2013-02-24
> 22:58:15,152::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err>
> = '  WARNING: This metadata update is NOT backed up\n  Not activating
> c59dcf20-2de2-44c6-858d-c4e9ec018f91/metadata since it does not pass
> activation filter.\n  Failed to activate new LV.\n  WARNING: This
> metadata update is NOT backed up\n'; <rc> = 5
> Thread-161377::ERROR::2013-02-24
> 22:58:15,155::task::833::TaskManager.Task::(_setError)
> Task=`9bc26f2d-0ee7-4afa-8159-59883f6d98d6`::Unexpected error
> Thread-161377::DEBUG::2013-02-24
> 22:58:15,171::task::852::TaskManager.Task::(_run)
> Task=`9bc26f2d-0ee7-4afa-8159-59883f6d98d6`::Task._run:
> 9bc26f2d-0ee7-4afa-8159-59883f6d98d6 (3,
> 'c59dcf20-2de2-44c6-858d-c4e9ec018f91', 'OS',
> 'Ecc2t6-uLUq-BNHq-Yjmx-d4w8-HqV6-Na88Cd', 1, '3') {} failed - stopping
> task
>
>
>
> ----- Original Message -----
>> I did the following right now to be sure:
>> - rotated logs
>> - deleted the LUN from Storage and created a new one (so the LUN is unused)
>> - rebooted the server
>> - opened oVirt Engine Web Administration
>> - Storage -> New Domain
>> - checked my newly created LUN
>> --> Error
>> - pressed the "OK" button one more time --> another error
>>
>> I have no other iSCSI storage domains on my installation. I tried to
>> create another LUN on the SAN with a smaller size, but the result is
>> the same.
>>
>> Thanks in advance. I hope this information will help to find out
>> what's wrong...
>>
>>
>> ----- Original Message -----
>>
>>
>>> We would also like to know whether the first creation of the iSCSI
>>> storage domain succeeded and only the later ones failed.
>>> Did you possibly try to create a storage domain with an already
>>> used LUN? And specifically, do you have any iSCSI storage domain at
>>> all that you can see in the webadmin?
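>>>
>>> On the host itself you can also list the storage domains vdsm knows
>>> about (if I remember the verb correctly):
>>>
>>> # vdsClient -s 0 getStorageDomainsList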
>>>
>>> Regards,
>>> Vered
>>>
>>> ----- Original Message -----
>>>> From: "Eduardo Warszawski" <ewarszaw at redhat.com>
>>>> To: "Vered Volansky" <vered at redhat.com>
>>>> Cc: users at ovirt.org
>>>> Sent: Thursday, February 21, 2013 11:57:09 AM
>>>> Subject: Re: [Users] HELP! Errors creating iSCSI Storage Domain
>>>>
>>>>
>>>>
>>>> ----- Original Message -----
>>>>> Hi,
>>>>>
>>>>> Please add engine and vdsm full logs.
>>>>>
>>>>> Regards,
>>>>> Vered
>>>>>
>>>>> ----- Original Message -----
>>>>>> From: "Кочанов Иван" <kia at dcak.ru>
>>>>>> To: users at ovirt.org
>>>>>> Sent: Thursday, February 21, 2013 4:26:14 AM
>>>>>> Subject: [Users] HELP! Errors creating iSCSI Storage Domain
>>>>>>
>>>>>>
>>>>>> LUN info:
>>>>>> # iscsiadm -m discovery -t st -p 172.16.20.1
>>>>>> 172.16.20.1:3260,1
>>>>>> iqn.2003-10.com.lefthandnetworks:mg-dcak-p4000:134:vol-os-virtualimages
>>>>>>
>>>>>>
>>>>>> # pvs
>>>>>>   PV                                            VG                                   Fmt  Attr PSize    PFree
>>>>>>   /dev/mapper/36000eb35f449ac1c0000000000000086 db260c45-8dd2-4397-a5bf-fd577bbbe0d4 lvm2 a--  1023.62g 1023.62g
>>>>>>
>>>> The PV already belongs to VG db260c45-8dd2-4397-a5bf-fd577bbbe0d4.
>>>> This VG was very probably created by oVirt and is part of a
>>>> StorageDomain.
>>>> (So a creation already succeeded at some point.)
>>>> If you are sure that no valuable info is in this VG, you can use the
>>>> "force" tick.
>>>> Anyway, I suggest you look for the SD UUID:
>>>> db260c45-8dd2-4397-a5bf-fd577bbbe0d4
>>>> and either remove this StorageDomain from the system or use the
>>>> already created SD.
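>>>>
>>>> If you decide to discard it and clean the device by hand, something
>>>> like the following should work (destructive, so double-check the
>>>> names first):
>>>>
>>>> # vgremove -f db260c45-8dd2-4397-a5bf-fd577bbbe0d4
>>>> # pvremove /dev/mapper/36000eb35f449ac1c0000000000000086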
>>>>
>>>> vdsm and engine logs will be very valuable, as Vered said.
>>>> Please include them from the point where you started, in order to
>>>> confirm which SD the LUN belongs to.
>>>>
>>>> Best regards,
>>>> E.
>>>>
>>>>>>
>>>>>>
>>>>>> -- 
>>>>>> Sysadmin DCAK kia at dcak.ru
>
>
> -- 
> Best regards, 
> system administrator of KGBUZ "DCAK" 
> Ivan Alexandrovich Kochanov 
>
> mailto:kia at dcak.ru
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users


-- 
Dafna Ron


