[Users] [rhev 3] add new domain fails: Could not retrieve LUNs

Scotto Alberto al.scotto at reply.it
Tue Aug 28 16:11:43 UTC 2012


After two or three reinstallations of RHEV-H, surprisingly, the behavior is now as expected: it skips the cciss disk without breaking.



Thread-11037::WARNING::2012-08-28 15:46:25,949::hsm::735::Storage.HSM::(_getDeviceList) Ignoring partitioned device {'product': 'LOGICAL VOLUME', 'dm': 'dm-2', 'devtypes': ['FCP'], 'fwrev': '2.08', 'logicalblocksize': '512', 'connections': [], 'devtype': 'FCP', 'physicalblocksize': '512', 'vendor': 'HP', 'serial': 'SHP_LOGICAL_VOLUME_PH79MW7539', 'guid': '3600508b1001035333920202020200005', 'paths': [{'devnum': DeviceNumber(Major=104, Minor=16), 'physdev': 'cciss!c0d1', 'type': 'FCP', 'state': 'active'}], 'capacity': '220122071040'}



Attached is the full vdsm log.

I didn't upgrade vdsm; I just did a reinstall (two reinstalls, actually). No idea what happened!

By the way, the procedure I posted to hide the cciss disk from multipath didn't stick: after a reboot, the hypervisor loses the changes, because it mounts the root file system from a live image.
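
If I understand the node architecture right, the fix might be to explicitly persist the file: as far as I know, RHEV-H ships a persist tool for exactly this. A sketch of what I mean (untested on my side):

---------------------
# mark the edited file as persistent so it survives the live-image reboot
persist /etc/multipath.conf

# to revert later:
# unpersist /etc/multipath.conf
---------------------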





However, I'm now experiencing high-latency issues. The message from the RHEV-M console is: "storage domain XYZ experienced a high latency of 14 seconds [...]".

Any ideas? Should I detach the cciss disk once again? I'm going to try.









Alberto Scotto

Blue Reply
Via Cardinal Massaia, 83
10147 - Torino - ITALY
phone: +39 011 29100
al.scotto at reply.it
www.reply.it


-----Original Message-----
From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of Scotto Alberto
Sent: Friday, 24 August 2012 11:44
To: Haim
Cc: users at ovirt.org
Subject: Re: [Users] [rhev 3] add new domain fails: Could not retrieve LUNs



It worked!



For the record, these are the steps I took (on RHEL 6):



1) edited /etc/multipath.conf, adding:

---------------------
blacklist {
    wwid 3600508b1001035333920202020200005
}
---------------------

2) service multipathd reload

3) multipath -f 3600508b1001035333920202020200005
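
To double-check that the map is really gone, something like this should do (a quick sketch; both commands should print nothing):

---------------------
multipath -ll | grep 3600508b1001035333920202020200005
dmsetup table | grep 3600508b1001035333920202020200005
---------------------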





I also filed the bug, as you suggested:

https://bugzilla.redhat.com/show_bug.cgi?id=851478





Thank you very much, Haim!



Best regards





PS: actually, adding a new domain still doesn't work. What a mess... At least now it lists my LUNs correctly :)
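
To re-check what vdsm sees without going through the console, I've been running what should be the same call RHEV-M makes:

---------------------
# on the hypervisor; now lists the LUNs instead of failing with 'hbtl'
vdsClient -s 0 getDeviceList
---------------------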









Alberto Scotto



Blue Reply

Via Cardinal Massaia, 83

10147 - Torino - ITALY

phone: +39 011 29100

al.scotto at reply.it

www.reply.it



-----Original Message-----

From: Haim [mailto:hateya at redhat.com]

Sent: Thursday, 23 August 2012 18:22

To: Scotto Alberto

Cc: users at ovirt.org

Subject: Re: [Users] [rhev 3] add new domain fails: Could not retrieve LUNs



On 08/23/2012 07:13 PM, Scotto Alberto wrote:

> I was going to reply... :)

> It looks like it halts after an error due to cciss!c0d1, which, by the way, is displayed by multipath -ll.

> That's just a local disk, isn't it? So it shouldn't even be listed. I may have attached it by mistake while playing with the /sys/class/fc_* tools.

> So, if I remove that path, everything should go OK. Do you think so too?



Yes, please remove it and check again (make sure to clean the device-mapper table with dmsetup remove $dm); see the sketch below.

Anyhow, vdsm should be more robust, so please file a bug for it (https://bugzilla.redhat.com/enter_bug.cgi?product=oVirt).
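
Something along these lines should do it (adjust the map name to whatever dmsetup table reports on your host):

---------------------
# flush the stale multipath map...
multipath -f 3600508b1001035333920202020200005
# ...and, only if it still shows up in 'dmsetup table', remove it directly
dmsetup remove 3600508b1001035333920202020200005
---------------------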



Haim



>

>> it appears that vdsm fails to handle a device with '!' in it (cciss!c0d1), but let's make sure it's indeed the case

> More than that: the path scsi_disk/ doesn't exist in /sys/block/cciss!c0d1/device. And this must be because c0d1 is NOT a damn SCSI disk.
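
> A quick way to confirm that (untested sketch; mind the '!', it needs quoting in the shell):
>
> ---------------------
> # vdsm's getHBTL reads /sys/block/<dev>/device/scsi_disk/,
> # which only exists for real SCSI disks
> ls '/sys/block/sda/device/scsi_disk/'         # fine for a SCSI/FC disk
> ls '/sys/block/cciss!c0d1/device/scsi_disk/'  # No such file or directory
> ---------------------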

>

>

> Anyway, here is your output

>

> [root at pittor06vhxd020 ~]# ls -l /sys/block/
> total 0
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:37 cciss!c0d0 -> ../devices/pci0000:00/0000:00:03.0/0000:06:00.0/cciss0/c0d0/block/cciss!c0d0
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:32 cciss!c0d1 -> ../devices/pci0000:00/0000:00:03.0/0000:06:00.0/cciss0/c0d1/block/cciss!c0d1
> lrwxrwxrwx. 1 root root 0 2007-06-30 01:17 dm-0 -> ../devices/virtual/block/dm-0
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:40 dm-1 -> ../devices/virtual/block/dm-1
> lrwxrwxrwx. 1 root root 0 2007-06-30 01:17 dm-2 -> ../devices/virtual/block/dm-2
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:49 dm-3 -> ../devices/virtual/block/dm-3
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:49 dm-4 -> ../devices/virtual/block/dm-4
> lrwxrwxrwx. 1 root root 0 2007-06-30 01:00 dm-5 -> ../devices/virtual/block/dm-5
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:49 dm-6 -> ../devices/virtual/block/dm-6
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 loop0 -> ../devices/virtual/block/loop0
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 loop1 -> ../devices/virtual/block/loop1
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 loop2 -> ../devices/virtual/block/loop2
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 loop3 -> ../devices/virtual/block/loop3
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 loop4 -> ../devices/virtual/block/loop4
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 loop5 -> ../devices/virtual/block/loop5
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 loop6 -> ../devices/virtual/block/loop6
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 loop7 -> ../devices/virtual/block/loop7
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram0 -> ../devices/virtual/block/ram0
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram1 -> ../devices/virtual/block/ram1
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram10 -> ../devices/virtual/block/ram10
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram11 -> ../devices/virtual/block/ram11
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram12 -> ../devices/virtual/block/ram12
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram13 -> ../devices/virtual/block/ram13
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram14 -> ../devices/virtual/block/ram14
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram15 -> ../devices/virtual/block/ram15
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram2 -> ../devices/virtual/block/ram2
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram3 -> ../devices/virtual/block/ram3
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram4 -> ../devices/virtual/block/ram4
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram5 -> ../devices/virtual/block/ram5
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram6 -> ../devices/virtual/block/ram6
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram7 -> ../devices/virtual/block/ram7
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram8 -> ../devices/virtual/block/ram8
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:10 ram9 -> ../devices/virtual/block/ram9
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:36 sda -> ../devices/pci0000:00/0000:00:02.0/0000:09:00.0/0000:0a:00.0/0000:0b:00.0/host2/rport-2:0-0/target2:0:0/2:0:0:0/block/sda
> lrwxrwxrwx. 1 root root 0 2007-06-29 18:59 sdb -> ../devices/pci0000:00/0000:00:02.0/0000:09:00.0/0000:0a:00.0/0000:0b:00.0/host2/rport-2:0-2/target2:0:2/2:0:2:0/block/sdb
> lrwxrwxrwx. 1 root root 0 2007-06-29 18:59 sdc -> ../devices/pci0000:00/0000:00:02.0/0000:09:00.0/0000:0a:00.0/0000:0b:00.0/host2/rport-2:0-3/target2:0:3/2:0:3:0/block/sdc
> lrwxrwxrwx. 1 root root 0 2007-06-29 18:59 sdd -> ../devices/pci0000:00/0000:00:02.0/0000:09:00.0/0000:0a:00.0/0000:0b:00.0/host2/rport-2:0-1/target2:0:1/2:0:1:0/block/sdd
> lrwxrwxrwx. 1 root root 0 2007-06-29 18:59 sde -> ../devices/pci0000:00/0000:00:02.0/0000:09:00.0/0000:0a:00.0/0000:0b:00.1/host3/rport-3:0-0/target3:0:0/3:0:0:0/block/sde
> lrwxrwxrwx. 1 root root 0 2007-06-29 18:59 sdf -> ../devices/pci0000:00/0000:00:02.0/0000:09:00.0/0000:0a:00.0/0000:0b:00.1/host3/rport-3:0-1/target3:0:1/3:0:1:0/block/sdf
> lrwxrwxrwx. 1 root root 0 2007-06-29 18:59 sdg -> ../devices/pci0000:00/0000:00:02.0/0000:09:00.0/0000:0a:00.0/0000:0b:00.1/host3/rport-3:0-2/target3:0:2/3:0:2:0/block/sdg
> lrwxrwxrwx. 1 root root 0 2007-06-29 18:59 sdh -> ../devices/pci0000:00/0000:00:02.0/0000:09:00.0/0000:0a:00.0/0000:0b:00.1/host3/rport-3:0-3/target3:0:3/3:0:3:0/block/sdh
> lrwxrwxrwx. 1 root root 0 2007-06-30 00:49 sr0 -> ../devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0/block/sr0

>

> [root at pittor06vhxd020 ~]# dmsetup table
> 3600601601cde1d0066b2fb054dece111: 0 1363148800 multipath 1 queue_if_no_path 1 emc 2 1 round-robin 0 4 1 8:0 1 8:48 1 8:64 1 8:80 1 round-robin 0 4 1 8:16 1 8:32 1 8:96 1 8:112 1
> HostVG-Logging: 0 4194304 linear 104:4 24741888
> HostVG-Swap: 0 24723456 linear 104:4 2048
> 3600508b1001035333920202020200005: 0 429925920 multipath 1 queue_if_no_path 0 1 1 round-robin 0 1 1 104:16 1
> HostVG-Data: 0 40624128 linear 104:4 28936192
> HostVG-Config: 0 16384 linear 104:4 24725504
> live-rw: 0 2097152 snapshot 7:1 7:2 P 8

>

> [root at pittor06vhxd020 ~]# lsblk
> NAME                                       MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> loop0                                        7:0    0  99.3M  1 loop
> loop1                                        7:1    0     1G  1 loop
> └─live-rw (dm-1)                           253:1    0     1G  0 dm    /
> loop2                                        7:2    0   512M  0 loop
> └─live-rw (dm-1)                           253:1    0     1G  0 dm    /
> cciss!c0d0                                 104:0    0  33.9G  0 disk
> ├─cciss!c0d0p1                             104:1    0   243M  0 part
> ├─cciss!c0d0p2                             104:2    0   244M  0 part
> ├─cciss!c0d0p3                             104:3    0   244M  0 part
> └─cciss!c0d0p4                             104:4    0  33.2G  0 part
>   ├─HostVG-Swap (dm-3)                     253:3    0  11.8G  0 lvm   [SWAP]
>   ├─HostVG-Config (dm-4)                   253:4    0     8M  0 lvm   /config
>   ├─HostVG-Logging (dm-5)                  253:5    0     2G  0 lvm   /var/log
>   └─HostVG-Data (dm-6)                     253:6    0  19.4G  0 lvm   /data
> cciss!c0d1                                 104:16   0   205G  0 disk
> └─3600508b1001035333920202020200005 (dm-0) 253:0    0   205G  0 mpath
> sr0                                         11:0    1  1024M  0 rom
> sdb                                          8:16   0   650G  0 disk
> └─3600601601cde1d0066b2fb054dece111 (dm-2) 253:2    0   650G  0 mpath
> sda                                          8:0    0   650G  0 disk
> └─3600601601cde1d0066b2fb054dece111 (dm-2) 253:2    0   650G  0 mpath
> sdc                                          8:32   0   650G  0 disk
> └─3600601601cde1d0066b2fb054dece111 (dm-2) 253:2    0   650G  0 mpath
> sdd                                          8:48   0   650G  0 disk
> └─3600601601cde1d0066b2fb054dece111 (dm-2) 253:2    0   650G  0 mpath
> sde                                          8:64   0   650G  0 disk
> └─3600601601cde1d0066b2fb054dece111 (dm-2) 253:2    0   650G  0 mpath
> sdf                                          8:80   0   650G  0 disk
> └─3600601601cde1d0066b2fb054dece111 (dm-2) 253:2    0   650G  0 mpath
> sdg                                          8:96   0   650G  0 disk
> └─3600601601cde1d0066b2fb054dece111 (dm-2) 253:2    0   650G  0 mpath
> sdh                                          8:112  0   650G  0 disk
> └─3600601601cde1d0066b2fb054dece111 (dm-2) 253:2    0   650G  0 mpath

>

>

>

> Alberto Scotto

>

> Blue Reply

> Via Cardinal Massaia, 83

> 10147 - Torino - ITALY

> phone: +39 011 29100

> al.scotto at reply.it

> www.reply.it

>

> -----Original Message-----

> From: Haim [mailto:hateya at redhat.com]

> Sent: Thursday, 23 August 2012 18:01

> To: Scotto Alberto

> Cc: users at ovirt.org

> Subject: Re: [Users] [rhev 3] add new domain fails: Could not retrieve

> LUNs

>

> On 08/23/2012 06:20 PM, Scotto Alberto wrote:

>> Here you are

> thanks, can you run the following?

>

> - ls -l /sys/block/

> - dmsetup table

> - lsblk (if it exists)

>

> it appears that vdsm fails to handle a device with '!' in it (cciss!c0d1), but let's make sure it's indeed the case.

>

>>

>> Thread-47346::DEBUG::2007-06-30 00:37:10,268::clientIF::239::Storage.Dispatcher.Protect::(wrapper) [10.16.250.216]
>> Thread-47346::INFO::2007-06-30 00:37:10,269::dispatcher::94::Storage.Dispatcher.Protect::(run) Run and protect: getDeviceList, args: ()
>> Thread-47346::DEBUG::2007-06-30 00:37:10,269::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: moving from state init -> state preparing
>> Thread-47346::DEBUG::2007-06-30 00:37:10,269::misc::1010::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
>> Thread-47346::DEBUG::2007-06-30 00:37:10,270::misc::1012::SamplingMethod::(__call__) Got in to sampling method
>> Thread-47346::DEBUG::2007-06-30 00:37:10,270::misc::1010::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
>> Thread-47346::DEBUG::2007-06-30 00:37:10,270::misc::1012::SamplingMethod::(__call__) Got in to sampling method
>> Thread-47346::DEBUG::2007-06-30 00:37:10,271::iscsi::699::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
>> Thread-47346::DEBUG::2007-06-30 00:37:10,300::iscsi::699::Storage.Misc.excCmd::(rescan) FAILED: <err> = 'iscsiadm: No session found.\n'; <rc> = 21
>> Thread-47346::DEBUG::2007-06-30 00:37:10,301::misc::1020::SamplingMethod::(__call__) Returning last result
>> Thread-47346::DEBUG::2007-06-30 00:37:10,661::multipath::61::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
>> Thread-47346::DEBUG::2007-06-30 00:37:10,785::multipath::61::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
>> Thread-47346::DEBUG::2007-06-30 00:37:10,786::lvm::547::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
>> Thread-47346::DEBUG::2007-06-30 00:37:10,786::lvm::549::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
>> Thread-47346::DEBUG::2007-06-30 00:37:10,786::lvm::559::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
>> Thread-47346::DEBUG::2007-06-30 00:37:10,787::lvm::561::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
>> Thread-47346::DEBUG::2007-06-30 00:37:10,787::lvm::580::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
>> Thread-47346::DEBUG::2007-06-30 00:37:10,788::lvm::582::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
>> Thread-47346::DEBUG::2007-06-30 00:37:10,788::misc::1020::SamplingMethod::(__call__) Returning last result
>> Thread-47346::DEBUG::2007-06-30 00:37:10,788::lvm::406::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
>> Thread-47346::DEBUG::2007-06-30 00:37:10,791::lvm::374::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n /sbin/lvm pvs --config " devices { preferred_names = [\\"^/dev/mapper/\\"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 filter = [ \\"a%3600508b1001035333920202020200005|3600601601cde1d0066b2fb054dece111%\\", \\"r%.*%\\" ] }  global {  locking_type=1  prioritise_write_locks=1  wait_for_locks=1 }  backup {  retain_min = 50  retain_days = 0 } " --noheadings --units b --nosuffix --separator | -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size' (cwd None)
>> Thread-47346::DEBUG::2007-06-30 00:37:10,997::lvm::374::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = '  /dev/sdh: read failed after 0 of 4096 at 0: Input/output error\n  /dev/sdh: read failed after 0 of 4096 at 697932120064: Input/output error\n  /dev/sdh: read failed after 0 of 4096 at 697932177408: Input/output error\n  WARNING: Error counts reached a limit of 3. Device /dev/sdh was disabled\n'; <rc> = 0
>> Thread-47346::DEBUG::2007-06-30 00:37:10,998::lvm::429::OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
>> MainProcess|Thread-47346::DEBUG::2007-06-30 00:37:11,005::devicemapper::144::Storage.Misc.excCmd::(_getPathsStatus) '/sbin/dmsetup status' (cwd None)
>> MainProcess|Thread-47346::DEBUG::2007-06-30 00:37:11,014::devicemapper::144::Storage.Misc.excCmd::(_getPathsStatus) SUCCESS: <err> = ''; <rc> = 0
>> MainProcess|Thread-47346::DEBUG::2007-06-30 00:37:11,019::multipath::159::Storage.Misc.excCmd::(getScsiSerial) '/sbin/scsi_id --page=0x80 --whitelisted --export --replace-whitespace --device=/dev/dm-0' (cwd None)
>> MainProcess|Thread-47346::DEBUG::2007-06-30 00:37:11,026::multipath::159::Storage.Misc.excCmd::(getScsiSerial) SUCCESS: <err> = ''; <rc> = 0
>> Thread-47346::WARNING::2007-06-30 00:37:11,027::multipath::261::Storage.Multipath::(pathListIter) Problem getting hbtl from device `cciss!c0d1` Traceback (most recent call last):
>>     File "/usr/share/vdsm/storage/multipath.py", line 259, in pathListIter
>>     File "/usr/share/vdsm/storage/multipath.py", line 182, in getHBTL
>> OSError: [Errno 2] No such file or directory: '/sys/block/cciss!c0d1/device/scsi_disk/'
>> Thread-47346::ERROR::2007-06-30 00:37:11,029::task::868::TaskManager.Task::(_setError) Unexpected error Traceback (most recent call last):
>>     File "/usr/share/vdsm/storage/task.py", line 876, in _run
>>     File "/usr/share/vdsm/storage/hsm.py", line 696, in public_getDeviceList
>>     File "/usr/share/vdsm/storage/hsm.py", line 759, in _getDeviceList
>> KeyError: 'hbtl'
>> Thread-47346::DEBUG::2007-06-30 00:37:11,030::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: Task._run: 0be1d461-f8fa-4c20-861d-27fde8124408 () {} failed - stopping task
>> Thread-47346::DEBUG::2007-06-30 00:37:11,030::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: stopping in state preparing (force False)
>> Thread-47346::DEBUG::2007-06-30 00:37:11,030::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: ref 1 aborting True
>> Thread-47346::INFO::2007-06-30 00:37:11,031::task::1171::TaskManager.Task::(prepare) aborting: Task is aborted: "'hbtl'" - code 100
>> Thread-47346::DEBUG::2007-06-30 00:37:11,031::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: Prepare: aborted: 'hbtl'
>> Thread-47346::DEBUG::2007-06-30 00:37:11,031::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: ref 0 aborting True
>> Thread-47346::DEBUG::2007-06-30 00:37:11,032::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: Task._doAbort: force False
>> Thread-47346::DEBUG::2007-06-30 00:37:11,032::resourceManager::821::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
>> Thread-47346::DEBUG::2007-06-30 00:37:11,032::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: moving from state preparing -> state aborting
>> Thread-47346::DEBUG::2007-06-30 00:37:11,033::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: _aborting: recover policy none
>> Thread-47346::DEBUG::2007-06-30 00:37:11,033::task::495::TaskManager.Task::(_debug) Task 0be1d461-f8fa-4c20-861d-27fde8124408: moving from state aborting -> state failed
>> Thread-47346::DEBUG::2007-06-30 00:37:11,033::resourceManager::786::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
>> Thread-47346::DEBUG::2007-06-30 00:37:11,034::resourceManager::821::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
>> Thread-47346::ERROR::2007-06-30 00:37:11,034::dispatcher::106::Storage.Dispatcher.Protect::(run) 'hbtl'
>> Thread-47346::ERROR::2007-06-30 00:37:11,034::dispatcher::107::Storage.Dispatcher.Protect::(run) Traceback (most recent call last):
>>     File "/usr/share/vdsm/storage/dispatcher.py", line 96, in run
>>     File "/usr/share/vdsm/storage/task.py", line 1178, in prepare
>> KeyError: 'hbtl'

>>

>>

>>

>>

>>

>>

>>

>> Alberto Scotto

>>

>> Blue Reply

>> Via Cardinal Massaia, 83

>> 10147 - Torino - ITALY

>> phone: +39 011 29100

>> al.scotto at reply.it

>> www.reply.it

>>

>> -----Original Message-----

>> From: Haim [mailto:hateya at redhat.com]

>> Sent: Thursday, 23 August 2012 17:00

>> To: Scotto Alberto

>> Cc: users at ovirt.org

>> Subject: Re: [Users] [rhev 3] add new domain fails: Could not

>> retrieve LUNs

>>

>> On 08/23/2012 05:54 PM, Scotto Alberto wrote:

>>

>> hi,

>>

>> can you attach the full vdsm log from the execution of the getDeviceList command?

>>> Hi all,

>>>

>>> I'm trying to configure an FCP storage domain on RHEV 3.

>>>

>>> I try to add a new domain from the console, but it can't find any LUNs: "Could not retrieve LUNs, please check your storage".

>>>

>>> Here is the output from /var/log/rhevm/rhevm.log:

>>>

>>> ------------------------------------

>>>

>>> 2007-06-29 21:50:07,811 WARN  [org.ovirt.engine.core.bll.GetConfigurationValueQuery] (http-0.0.0.0-8443-1) calling GetConfigurationValueQuery with null version, using default general for version
>>> 2007-06-29 21:50:07,911 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (http-0.0.0.0-8443-1) START, GetDeviceListVDSCommand(vdsId = 7e077f4c-25d8-11dc-bbcb-001cc4c2469a, storageType=FCP), log id: 60bdafe6
>>> 2007-06-29 21:50:08,726 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (http-0.0.0.0-8443-1) Failed in GetDeviceListVDS method
>>> 2007-06-29 21:50:08,727 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (http-0.0.0.0-8443-1) Error code BlockDeviceActionError and error message VDSGenericException: VDSErrorException: Failed to GetDeviceListVDS, error = Error block device action: ()
>>> 2007-06-29 21:50:08,727 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (http-0.0.0.0-8443-1) Command org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand return value
>>>
>>> Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.LUNListReturnForXmlRpc
>>> lunList  Null
>>> mStatus  Class Name: org.ovirt.engine.core.vdsbroker.vdsbroker.StatusForXmlRpc
>>> mCode    600
>>> mMessage Error block device action: ()
>>>
>>> 2007-06-29 21:50:08,727 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase] (http-0.0.0.0-8443-1) Vds: pittor06vhxd020
>>> 2007-06-29 21:50:08,727 ERROR [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (http-0.0.0.0-8443-1) Command GetDeviceListVDS execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetDeviceListVDS, error = Error block device action: ()
>>> 2007-06-29 21:50:08,727 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (http-0.0.0.0-8443-1) FINISH, GetDeviceListVDSCommand, log id: 60bdafe6
>>> 2007-06-29 21:50:08,727 ERROR [org.ovirt.engine.core.bll.storage.GetDeviceListQuery] (http-0.0.0.0-8443-1) Query GetDeviceListQuery failed. Exception message is VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetDeviceListVDS, error = Error block device action: ()

>>>

>>> ----------------------------------------------

>>>

>>> First question: do LUNs have to be visible from RHEV-H or RHEV-M?

>>>

>>> Currently they are visible only from the hypervisor.

>>>

>>> ----------------------------------------

>>>

>>> [root at pittor06vhxd020 log]# multipath -ll

>>> 3600601601cde1d0066b2fb054dece111 dm-2 DGC,RAID 5
>>> size=650G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
>>> |-+- policy='round-robin 0' prio=1 status=active
>>> | |- 2:0:0:0 sda 8:0   active ready running
>>> | |- 2:0:1:0 sdd 8:48  active ready running
>>> | |- 3:0:0:0 sde 8:64  active ready running
>>> | `- 3:0:1:0 sdf 8:80  active ready running
>>> `-+- policy='round-robin 0' prio=0 status=enabled
>>>   |- 2:0:2:0 sdb 8:16  active ready running
>>>   |- 2:0:3:0 sdc 8:32  active ready running
>>>   |- 3:0:2:0 sdg 8:96  active ready running
>>>   `- 3:0:3:0 sdh 8:112 active ready running
>>> 3600508b1001035333920202020200005 dm-0 HP,LOGICAL VOLUME
>>> size=205G features='1 queue_if_no_path' hwhandler='0' wp=rw
>>> `-+- policy='round-robin 0' prio=1 status=active
>>>   `- 0:0:1:0 cciss!c0d1 104:16 active ready running

>>> ------------------------------------------------------

>>>

>>> Our SAN device is a Clariion AX150. Is it compatible with oVirt?

>>>

>>> vdsClient -s 0 getDeviceList gives me:

>>>

>>> Error block device action: ()

>>>

>>> Could it be due to the SPM being turned off? (I have only one host.)

>>>

>>> [root at pittor06vhxd020 log]# ps axu | grep -i spm

>>>

>>> root 16068 0.0 0.0 7888 868 pts/1 R+ 00:04 0:00 grep -i spm

>>>

>>> How can I turn it on? I know the command, but I don't know what parameters to append:

>>>

>>> spmStart <spUUID> <prevID> <prevLVER> <recoveryMode> <scsiFencing> <maxHostID> <version>
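
>>> My best guess at the invocation, from the usage string above (completely untested, and I don't know the right values for the placeholders):
>>>
>>> ---------------------
>>> # <spUUID> is the storage pool UUID; the other placeholders are
>>> # taken verbatim from the usage string
>>> vdsClient -s 0 spmStart <spUUID> <prevID> <prevLVER> <recoveryMode> <scsiFencing> <maxHostID> <version>
>>> ---------------------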

>>>

>>> Thank you very much for any hints.

>>>

>>> AS

>>>

>>>

>>>

>>> Alberto Scotto

>>>

>>> Blue

>>> Via Cardinal Massaia, 83

>>> 10147 - Torino - ITALY

>>> phone: +39 011 29100

>>> al.scotto at reply.it

>>> www.reply.it

>>>


_______________________________________________

Users mailing list

Users at ovirt.org

http://lists.ovirt.org/mailman/listinfo/users





-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: add storage domain (fcp) SUCCEEDS - vdsm log.txt
URL: <http://lists.ovirt.org/pipermail/users/attachments/20120828/a89af9ae/attachment-0001.txt>

