[ovirt-users] set up of fibre channel storage problem

Campbell McLeay campbell.mcleay at framestore.com
Sat Mar 5 18:35:34 EST 2016


Hi Nir,

I'd removed the partitions earlier with parted, but I guess it had
left a signature there, so I did as you suggested and that resolved
the issue.
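
In case anyone else hits this: removing the partitions with parted still
leaves the partition-table signature itself on the disk, which is what
was tripping things up here. As an alternative to zeroing the header with
dd, wipefs (from util-linux) can list and erase whatever signatures are
left. A minimal sketch, using the first LUN ID from the multipath output
quoted below:

wipefs /dev/mapper/36f01faf000e5fd64000007d255820b3d      # list remaining signatures
wipefs -a /dev/mapper/36f01faf000e5fd64000007d255820b3d   # erase them (destructive, double-check the ID first)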

Many thanks!

-Cam

On 5 March 2016 at 22:45, Nir Soffer <nsoffer at redhat.com> wrote:
> On Sun, Mar 6, 2016 at 12:36 AM, Campbell McLeay
> <campbell.mcleay at framestore.com> wrote:
>> Hi,
>>
>> I've set up a node in a cluster, and I would like to use the attached
>> fibre channel disks as storage. The oVirt version is 3.6, and both hosts
>> (engine and node) are running Scientific Linux 7.2. When I select the
>> fibre channel option from the drop-down menu in the storage section of
>> the oVirt manager GUI, it shows the target disks on the node host
>> (actually symlinks named with 34-character IDs, which I assume are
>> created by multipathd and which point to the actual devices in /dev,
>> such as /dev/dm-3). 'multipath -ll' shows the devices up and running:
>>
>> 36f01faf000e5fd64000007d255820b3d dm-1 DELL    ,MD36xxf
>> size=5.7T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
>> |-+- policy='service-time 0' prio=14 status=active
>> | `- 8:0:0:3 sdi 8:128 active ready running
>> `-+- policy='service-time 0' prio=9 status=enabled
>>   `- 1:0:0:3 sde 8:64  active ready running
>> 36f01faf000e5fd5b000009fc52cc8a5b dm-2 DELL    ,MD36xxf
>> size=9.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
>> |-+- policy='service-time 0' prio=14 status=active
>> | `- 1:0:0:0 sdb 8:16  active ready running
>> `-+- policy='service-time 0' prio=9 status=enabled
>>   `- 8:0:0:0 sdf 8:80  active ready running
>> 36f01faf000e5fd640000078752cd9374 dm-0 DELL    ,MD36xxf
>> size=9.0T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
>> |-+- policy='service-time 0' prio=14 status=active
>> | `- 8:0:0:1 sdg 8:96  active ready running
>> `-+- policy='service-time 0' prio=9 status=enabled
>>   `- 1:0:0:1 sdc 8:32  active ready running
>> 36f01faf000e5fd5b00000a5a55820464 dm-3 DELL    ,MD36xxf
>> size=5.7T features='2 pg_init_retries 50' hwhandler='1 rdac' wp=rw
>> |-+- policy='service-time 0' prio=14 status=active
>> | `- 1:0:0:2 sdd 8:48  active ready running
>> `-+- policy='service-time 0' prio=9 status=enabled
>>   `- 8:0:0:2 sdh 8:112 active ready running
>>
>> When I select those IDs, it says 'The following LUNs are in use' (does
>> it perhaps mean in use by multipath?) and asks me to 'Approve operation',
>> which I do, but it then fails with:
>>
>> "Error while executing action New SAN Storage Domain: Physical device
>> initialization failed. Please check that the device is empty and
>> accessible by the host."
>
> This usually means the device has a partition table, and LVM will not
> overwrite the device.
>
> You can check this by inspecting the devices with fdisk.
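>
> For example, a read-only look at the first LUN from your listing:
>
> fdisk -l /dev/mapper/36f01faf000e5fd64000007d255820b3d
>
> If that reports a partition table on the device, it is most likely what
> makes the storage domain creation fail.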
>
> If you are sure that the devices are not in use, you can wipe the start
> of each device like this:
>
> dd if=/dev/zero of=/dev/mapper/36f01faf000e5fd64000007d255820b3d bs=1M count=1
>
> After that, you should be able to select the device for a storage domain.
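>
> To confirm the wipe took effect, the same fdisk -l check above should no
> longer report a partition table before you retry from the GUI:
>
> fdisk -l /dev/mapper/36f01faf000e5fd64000007d255820b3d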
>
> Nir
>
>>
>> The storage is not mounted and is not being used by any other host.
>> The error from the engine log is:
>>
>> ---------------------8<--------------------------
>>
>> 2016-03-05 21:50:47,326 INFO
>> [org.ovirt.engine.core.bll.hostdeploy.VdsDeployBase] (VdsDeploy) []
>> Stage: Termination
>> 2016-03-05 21:51:21,460 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
>> (default task-167) [] START, GetDeviceListVDSCommand(HostName =
>> kvm-ldn-02, GetDeviceListVDSCommandParameters:{runAsync='true',
>> hostId='d44741f9-00c9-45d2-a76d-9ea5bd2893e0', storageType='FCP',
>> checkStatus='true', lunIds='[36f01faf000e5fd64000007d255820b3d]'}),
>> log id: 67073545
>> 2016-03-05 21:51:22,966 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
>> (default task-167) [] FINISH, GetDeviceListVDSCommand, return:
>> [LUNs:{id='36f01faf000e5fd64000007d255820b3d', physicalVolumeId='',
>> volumeGroupId='', serial='SDELL_MD36xxf_3BD000S', lunMapping='3',
>> vendorId='DELL', productId='MD36xxf', lunConnections='[]',
>> deviceSize='5865', pvSize='0', vendorName='DELL',
>> pathsDictionary='[sde=true, sdi=true]', pathsCapacity='[sde=5865,
>> sdi=5865]', lunType='FCP', status='Used', diskId='null',
>> diskAlias='null', storageDomainId='null', storageDomainName='null'}],
>> log id: 67073545
>> 2016-03-05 21:51:26,917 INFO
>> [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand]
>> (default task-172) [71343e04] Running command:
>> AddSANStorageDomainCommand internal: false. Entities affected :  ID:
>> aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
>> CREATE_STORAGE_DOMAIN with role type ADMIN
>> 2016-03-05 21:51:26,935 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
>> (default task-172) [71343e04] START, CreateVGVDSCommand(HostName =
>> kvm-ldn-02, CreateVGVDSCommandParameters:{runAsync='true',
>> hostId='d44741f9-00c9-45d2-a76d-9ea5bd2893e0',
>> storageDomainId='65fd9f6a-5674-43b2-99d7-18741be5943d',
>> deviceList='[36f01faf000e5fd64000007d255820b3d]', force='true'}), log
>> id: 45c41848
>> 2016-03-05 21:51:28,069 ERROR
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
>> (default task-172) [71343e04] Failed in 'CreateVGVDS' method
>> 2016-03-05 21:51:28,073 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-172) [71343e04] Correlation ID: null, Call Stack: null,
>> Custom Event ID: -1, Message: VDSM kvm-ldn-02 command failed: Failed
>> to initialize physical device:
>> ("[u'/dev/mapper/36f01faf000e5fd64000007d255820b3d']",)
>> 2016-03-05 21:51:28,073 INFO
>> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand]
>> (default task-172) [71343e04] Command
>> 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand' return
>> value 'OneUuidReturnForXmlRpc:{status='StatusForXmlRpc [code=601,
>> message=Failed to initialize physical device:
>> ("[u'/dev/mapper/36f01faf000e5fd64000007d255820b3d']",)]'}'
>>
>> ---------------------8<--------------------------
>>
>> On the node, in the messages log, I get:
>>
>> ---------------------8<--------------------------
>>
>> Mar  5 22:18:53 kvm-ldn-02 multipathd: dm-4: remove map (uevent)
>> Mar  5 22:18:53 kvm-ldn-02 multipathd: dm-4: remove map (uevent)
>> Mar  5 22:19:11 kvm-ldn-02 kernel: device-mapper: table: 253:4:
>> multipath: error getting device
>> Mar  5 22:19:11 kvm-ldn-02 kernel: device-mapper: ioctl: error adding
>> target to table
>> Mar  5 22:19:11 kvm-ldn-02 multipathd: dm-4: remove map (uevent)
>> Mar  5 22:19:11 kvm-ldn-02 multipathd: dm-4: remove map (uevent)
>>
>> ---------------------8<--------------------------
>>
>> I'm not sure why it is looking at dm-4 and dm-8, which don't exist (the
>> symlinks point to dm-0 through dm-3 for these devices).
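>>
>> For cross-checking, dmsetup can list the maps that do exist together
>> with their minor numbers; given the 'remove map (uevent)' lines above,
>> perhaps dm-4 and up were transient maps being torn down:
>>
>> dmsetup ls          # map names with their major:minor numbers
>> dmsetup info -c     # columnar view: name, major, minor, open count, state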
>>
>> What am I doing wrong here? Any help much appreciated.
>>
>> Thanks,
>>
>> Cam
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

