[Users] HELP! Errors creating iSCSI Storage Domain

Hello oVirt Community!
I need help configuring an iSCSI data center on oVirt.

I have an HP P4000 (LeftHand) iSCSI storage array with a LUN on it. The LUN is visible to the server.
I installed oVirt from the ovirt-dre repo. Everything works except adding an iSCSI storage domain.
After iSCSI discovery I get the list of iSCSI LUNs, but when I select the LUN I need I get an error (Error #1).
If I try to add it a second time I get different errors (Error #2 or Error #3).

If I work with the iSCSI volume manually I have no problems - partitions are created fine.

Can you advise anything? I have been stuck on this problem for two weeks already...

I first installed oVirt 3.1. oVirt 3.2 was released this week and I did a fresh installation of that version, but nothing changed - the errors are the same.

Error #1:
Error while executing action New SAN Storage Domain: Cannot create Logical Volume

Error #2:
Error while executing action New SAN Storage Domain: Physical device initialization failed. Check that the device is empty. Please remove all files and partitions from the device.

Error #3:
Error while executing action New SAN Storage Domain: Invalid physical device

LUN info:
# iscsiadm -m discovery -t st -p 172.16.20.1
172.16.20.1:3260,1 iqn.2003-10.com.lefthandnetworks:mg-dcak-p4000:134:vol-os-virtualimages

# pvs
  PV                                            VG                                   Fmt  Attr PSize    PFree
  /dev/mapper/36000eb35f449ac1c0000000000000086 db260c45-8dd2-4397-a5bf-fd577bbbe0d4 lvm2 a--  1023.62g 1023.62g

# fdisk -l

Disk /dev/sdc: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/36000eb35f449ac1c0000000000000086: 1099.5 GB, 1099511627776 bytes
255 heads, 63 sectors/track, 133674 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Server info:
# rpm -q centos-release
centos-release-6-3.el6.centos.9.x86_64

# uname -r
2.6.32-279.22.1.el6.x86_64

--
Sysadmin DCAK
kia@dcak.ru

Hi,

Please add the full engine and vdsm logs.
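The engine log is usually /var/log/ovirt-engine/engine.log on the engine machine, and the vdsm log is /var/log/vdsm/vdsm.log on the hypervisor host (assuming a default installation; adjust the paths if you relocated the logs). For example:

# tar czf engine-logs.tar.gz /var/log/ovirt-engine/engine.log*
# tar czf vdsm-logs.tar.gz /var/log/vdsm/vdsm.log*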
From: "Кочанов Иван" <kia@dcak.ru> To: users@ovirt.org Sent: Thursday, February 21, 2013 4:26:14 AM Subject: [Users] HELP! Errors creating iSCSI Storage Domain
Hello oVirt Community! Need help in configuring iSCSI DataCenter on oVirt.
I have HP P4000 iSCSI Storage (LeftHand). And LUN on it. This LUN is available for the Server. I tried to install oVirt from ovirt-dre repo. Everything works except adding iSCSI Storage Domain. After iSCSI-discovery i get list of iSCSI LUNs. But I get error after choosing LUN I need (Error#1). If I try to add it second time I get another errors (Error#2 or Error#3).
If I operate with iSCSI volume manually, I have no problems - Partitions are created.
Can you advise something? I stuck with this problem for two weeks already...
First I installed ovirt3.1. This week ovirt3.2 released and I did a fresh installation of this version, but nothing changed - errors are the same.
Error #1:
Error while executing action New SAN Storage Domain: Cannot create Logical Volume
Error #2: Error while executing action New SAN Storage Domain: Physical device initialization failed. Check that the device is empty. Please remove all files and partitions from the device.
Error #3: Error while executing action New SAN Storage Domain: Invalid physical device
LUN info: # iscsiadm -m discovery -t st -p 172.16.20.1 172.16.20.1:3260,1 iqn.2003-10.com.lefthandnetworks:mg-dcak-p4000:134:vol-os-virtualimages
# pvs PV VG Fmt Attr PSize PFree /dev/mapper/36000eb35f449ac1c0000000000000086 db260c45-8dd2-4397-a5bf-fd577bbbe0d4 lvm2 a-- 1023.62g 1023.62g
# fdisk -l
Disk /dev/sdc: 1099.5 GB, 1099511627776 bytes 255 heads, 63 sectors/track, 133674 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000
Disk /dev/mapper/36000eb35f449ac1c0000000000000086: 1099.5 GB, 1099511627776 bytes 255 heads, 63 sectors/track, 133674 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000
Server info: # rpm -q centos-release centos-release-6-3.el6.centos.9.x86_64
# uname -r 2.6.32-279.22.1.el6.x86_64
-- Sysadmin DCAK kia@dcak.ru _______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

The PV already belongs to the VG db260c45-8dd2-4397-a5bf-fd577bbbe0d4. This VG was very probably created by oVirt and is part of a storage domain (so at some point you already succeeded). If you are sure there is no valuable data in this VG, you can use the "force" checkbox. In any case, I suggest looking for the storage domain with SDUUID db260c45-8dd2-4397-a5bf-fd577bbbe0d4 and either removing that storage domain from the system or simply using the already created SD.
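If you decide the old VG is disposable, something along these lines should show the leftover domain's VG and wipe the stale LVM metadata from the LUN before you retry the creation (untested on your setup - double-check the VG and PV names against your own pvs output first):

# vgs -o vg_name,vg_tags db260c45-8dd2-4397-a5bf-fd577bbbe0d4
# vgremove -f db260c45-8dd2-4397-a5bf-fd577bbbe0d4
# pvremove /dev/mapper/36000eb35f449ac1c0000000000000086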
As Vered said, the full vdsm and engine logs will be very valuable. Please include them from the time you started, so we can confirm which SD the LUN belongs to.

Best regards,
E.

We would also like to know whether the first creation of the iSCSI storage domain succeeded and only the later attempts failed. Did you perhaps try to create a storage domain on an already used LUN? And specifically, do you have any iSCSI storage domain at all visible in the webadmin?

Regards,
Vered

To be sure, I did the following just now:
- rotated the logs
- deleted the LUN on the storage and created a new one (so the LUN is unused)
- rebooted the server
- opened oVirt Engine Web Administration
- Storage -> New Domain
- selected my newly created LUN
--> Error
- pressed the "OK" button one more time - another error

I have no other iSCSI storage domains in my installation. I also tried creating another, smaller LUN on the SAN, but the result is the same.

Thanks in advance. I hope this information will help to find out what's wrong...

I have found the place where the error occurs that prevents the iSCSI storage domain creation.
The error is:

FAILED: <err> = ' WARNING: This metadata update is NOT backed up\n Not activating c59dcf20-2de2-44c6-858d-c4e9ec018f91/metadata since it does not pass activation filter.\n Failed to activate new LV.\n WARNING: This metadata update is NOT backed up\n'; <rc> = 5

The reason is probably in the LVM metadata backup mechanism, but I'm not sure.
Do you have any thoughts about it?

Here is the relevant part of the vdsm log:

Thread-161377::DEBUG::2013-02-24 22:58:14,217::BindingXMLRPC::161::vds::(wrapper) [192.168.0.22]
Thread-161377::DEBUG::2013-02-24 22:58:14,218::task::568::TaskManager.Task::(_updateState) Task=`9bc26f2d-0ee7-4afa-8159-59883f6d98d6`::moving from state init -> state preparing
Thread-161377::INFO::2013-02-24 22:58:14,218::logUtils::41::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=3, sdUUID='c59dcf20-2de2-44c6-858d-c4e9ec018f91', domainName='OS', typeSpecificArg='Ecc2t6-uLUq-BNHq-Yjmx-d4w8-HqV6-Na88Cd', domClass=1, domVersion='3', options=None)
Thread-161377::INFO::2013-02-24 22:58:14,220::blockSD::463::Storage.StorageDomain::(create) sdUUID=c59dcf20-2de2-44c6-858d-c4e9ec018f91 domainName=OS domClass=1 vgUUID=Ecc2t6-uLUq-BNHq-Yjmx-d4w8-HqV6-Na88Cd storageType=3 version=3
Thread-161377::DEBUG::2013-02-24 22:58:14,221::lvm::368::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-161377::DEBUG::2013-02-24 22:58:14,222::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%36000eb35f449ac1c000000000000005d|36000eb35f449ac1c000000000000008c%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free' (cwd None)
Thread-161377::DEBUG::2013-02-24 22:58:14,470::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-161377::DEBUG::2013-02-24 22:58:14,474::lvm::397::OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-161377::DEBUG::2013-02-24 22:58:14,474::lvm::409::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-161377::DEBUG::2013-02-24 22:58:14,475::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm lvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%36000eb35f449ac1c000000000000005d|36000eb35f449ac1c000000000000008c%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags c59dcf20-2de2-44c6-858d-c4e9ec018f91' (cwd None)
Thread-161377::DEBUG::2013-02-24 22:58:14,702::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-161377::DEBUG::2013-02-24 22:58:14,703::lvm::442::OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-161377::DEBUG::2013-02-24 22:58:14,704::lvm::334::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
Thread-161377::DEBUG::2013-02-24 22:58:14,705::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%36000eb35f449ac1c000000000000005d|36000eb35f449ac1c000000000000008c%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size /dev/mapper/36000eb35f449ac1c000000000000008c' (cwd None)
Thread-161377::DEBUG::2013-02-24 22:58:14,889::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> = ''; <rc> = 0
Thread-161377::DEBUG::2013-02-24 22:58:14,890::lvm::359::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
Thread-161377::INFO::2013-02-24 22:58:14,890::blockSD::448::Storage.StorageDomain::(metaSize) size 512 MB (metaratio 262144)
Thread-161377::DEBUG::2013-02-24 22:58:14,891::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/bin/sudo -n /sbin/lvm lvcreate --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%36000eb35f449ac1c000000000000005d|36000eb35f449ac1c000000000000008c%\\", \\"r%.*%\\" ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup { retain_min = 50 retain_days = 0 } " --autobackup n --contiguous n --size 512m --name metadata c59dcf20-2de2-44c6-858d-c4e9ec018f91' (cwd None)
Thread-161377::DEBUG::2013-02-24 22:58:15,152::misc::84::Storage.Misc.excCmd::(<lambda>) FAILED: <err> = ' WARNING: This metadata update is NOT backed up\n Not activating c59dcf20-2de2-44c6-858d-c4e9ec018f91/metadata since it does not pass activation filter.\n Failed to activate new LV.\n WARNING: This metadata update is NOT backed up\n'; <rc> = 5
Thread-161377::ERROR::2013-02-24 22:58:15,155::task::833::TaskManager.Task::(_setError) Task=`9bc26f2d-0ee7-4afa-8159-59883f6d98d6`::Unexpected error
Thread-161377::DEBUG::2013-02-24 22:58:15,171::task::852::TaskManager.Task::(_run) Task=`9bc26f2d-0ee7-4afa-8159-59883f6d98d6`::Task._run: 9bc26f2d-0ee7-4afa-8159-59883f6d98d6 (3, 'c59dcf20-2de2-44c6-858d-c4e9ec018f91', 'OS', 'Ecc2t6-uLUq-BNHq-Yjmx-d4w8-HqV6-Na88Cd', 1, '3') {} failed - stopping task
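Just a guess on my side: could the activation filter come from /etc/lvm/lvm.conf on the host rather than from the --config string that vdsm passes on the command line? If it helps, the host configuration can be checked with something like:

# grep -nE 'volume_list|^[[:space:]]*filter' /etc/lvm/lvm.conf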
--
Best regards,
system administrator, КГБУЗ "ДЦАК"
Кочанов Иван Александрович
mailto:kia@dcak.ru

The metadata warning is not the issue (you will always see this warning in the log when metadata is changed). Looking at the logs, I can see the following error:

Thread-97::ERROR::2013-02-21 17:26:13,041::lvm::680::Storage.LVM::(_initpvs) [], [' Can\'t initialize physical volume "/dev/mapper/36000eb35f449ac1c000000000000008c" of volume group "05fd0730-298a-4884-aa41-416416a1fb24" without -ff']
Thread-97::ERROR::2013-02-21 17:26:13,042::task::833::TaskManager.Task::(_setError) Task=`37aa0fb8-b527-4d28-8a7c-1bc7a1d8732e`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 42, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 1948, in createVG
    (force.capitalize() == "True")))
  File "/usr/share/vdsm/storage/lvm.py", line 859, in createVG
    _initpvs(pvs, metadataSize, force)
  File "/usr/share/vdsm/storage/lvm.py", line 681, in _initpvs
    raise se.PhysDevInitializationError(str(devices))
PhysDevInitializationError: Failed to initialize physical device: ("['/dev/mapper/36000eb35f449ac1c000000000000008c']",)

This looks to me like this bug, which has already been fixed: https://bugzilla.redhat.com/show_bug.cgi?id=908776

Can you please update vdsm and try again?
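Something along these lines should do it on the host (a sketch only - the exact packages and repository depend on how you installed oVirt):

# rpm -q vdsm
# yum update vdsm
# service vdsmd restart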
Thanks,
Dafna
-- Dafna Ron
participants (4)
- Dafna Ron
- Eduardo Warszawski
- Vered Volansky
- Кочанов Иван