[ovirt-users] can not use iscsi storage type on ovirt and Glusterfs hyper-converged environment

Sahina Bose sabose at redhat.com
Tue Dec 6 09:17:33 UTC 2016


On Tue, Dec 6, 2016 at 7:46 AM, Nir Soffer <nsoffer at redhat.com> wrote:

> On Tue, Dec 6, 2016 at 3:15 AM, 胡茂荣 <maorong.hu at horebdata.cn> wrote:
>
>>  Hi Nir:
>>      Before changing the code, supervdsm reported errors like these:
>> ===================================  ------>
>>
>> MainProcess|jsonrpc.Executor/2::DEBUG::2016-12-02
>> 17:12:16,372::supervdsmServer::92::SuperVdsm.ServerCallback::(wrapper)
>> call getPathsStatus with () {}
>>
>> MainProcess|jsonrpc.Executor/2::DEBUG::2016-12-02
>> 17:12:16,373::devicemapper::154::Storage.Misc.excCmd::(_getPathsStatus)
>> /usr/bin/taskset --cpu-list 0-7 /usr/sbin/dmsetup status (cwd None)
>>
>> MainProcess|jsonrpc.Executor/2::DEBUG::2016-12-02
>> 17:12:16,377::devicemapper::154::Storage.Misc.excCmd::(_getPathsStatus)
>> SUCCESS: <err> = ''; <rc> = 0
>>
>> MainProcess|jsonrpc.Executor/2::ERROR::2016-12-02
>> 17:12:16,378::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) Error
>> in getPathsStatus
>> =======================================  <----------------
>>     My environment has other dm devices; dmsetup status shows them as
>> follows, which is why the error above occurs:
>> ==================================================== -------->
>> flash_sdd: 0 976771072 flashcache stats:
>>         reads(5723182), writes(47879823)
>>         read hits(2938125), read hit percent(51)
>>         write hits(12617126) write hit percent(26)
>>         dirty write hits(4592227) dirty write hit percent(9)
>>         replacement(701168), write replacement(3189334)
>>         write invalidates(0), read invalidates(1)
>> ----
>> ====================================================   <--------------
>>       With your patch applied, adding or importing an iscsi type storage
>> domain works without problems; the log looks like this:
>> ====================================================== --------->
>>
>> MainProcess|jsonrpc.Executor/1::DEBUG::2016-12-06
>> 08:58:07,439::supervdsmServer::92::SuperVdsm.ServerCallback::(wrapper)
>> call getPathsStatus with () {}
>>
>> MainProcess|jsonrpc.Executor/1::DEBUG::2016-12-06
>> 08:58:07,439::devicemapper::155::Storage.Misc.excCmd::(_getPathsStatus)
>> /usr/bin/taskset --cpu-list 0-7 /usr/sbin/dmsetup status --target
>> multipath (cwd None)
>> MainProcess|jsonrpc.Executor/1::DEBUG::2016-12-06
>> 08:58:07,443::devicemapper::155::Storage.Misc.excCmd::(_getPathsStatus)
>> SUCCESS: <err> = ''; <rc> = 0
>> ============================================================
>> =====================
>>
>>   Regards,
>>     humaorong
>>
>
> Thanks for testing this.
>
> Sahina, can you test if this solves the issue with the hyperconverged setup?
> Maybe we can enable multipath with this patch.
>

This patch does not affect the issue that we have with duplicate multipath
entries created during vdsm start.
However, it looks like the latest version of lvm2 disables lvmetad when
duplicate PVs are found, and the pvs command no longer hangs as originally
reported in https://bugzilla.redhat.com/show_bug.cgi?id=1303940 when
tested on RHEL 7.3.

I am not sure whether we should be worried about these warnings or not.

 # pvs
  WARNING: Not using lvmetad because duplicate PVs were found.
  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?
  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.
  PV                                            VG           Fmt  Attr PSize   PFree
  /dev/mapper/36c81f660c9bcaa001d65c9720fe36369 RHS_vg1      lvm2 a--    1.82t 208.09g
  /dev/mapper/36c81f660c9bcaa001d65c9be146d3c97 RHS_vg1      lvm2 a--    1.82t   1.61t
  /dev/sda2                                     rhel_headwig lvm2 a--  277.87g      0

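For reference only, and assuming the duplicate PVs are just the raw SCSI
paths behind the mpath maps listed above, the cleanup that the warning itself
suggests would look roughly like this (a sketch, not tested on this host):

  # multipath -r       <- reload the maps so multipath claims the raw paths
  # pvscan --cache     <- repopulate lvmetad once the duplicates are gone
  # pvs                <- the duplicate-PV warnings should no longer appear

If the raw paths have to stay visible, an lvm.conf global_filter that
excludes them would be the alternative.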

> Cheers,
> Nir
>
>
>>
>>
>> ------------------ Original ------------------
>> *From: * "Nir Soffer"<nsoffer at redhat.com>;
>> *Date: * Tue, Dec 6, 2016 01:41 AM
>> *To: * "胡茂荣"<maorong.hu at horebdata.cn>;
>> *Cc: * "users"<users at ovirt.org>; "Jeff Nelson"<jenelson at redhat.com>;
>> "胡晓宇"<samuel.xhu at horebdata.cn>;
>> *Subject: * Re: [ovirt-users] can not use iscsi storage type
>> on ovirt and Glusterfs hyper-converged environment
>>
>> Hi 胡茂荣,
>>
>> Can you test this patch?
>> https://gerrit.ovirt.org/67844
>>
>> I also need more information on your setup, to add more
>> details to the commit message.
>>
>> Thanks for reporting this,
>> Nir
>>
>> On Mon, Dec 5, 2016 at 10:28 AM, 胡茂荣 <maorong.hu at horebdata.cn> wrote:
>>
>>>
>>>   Thanks to Yaniv Kaul. Changing the code requires building the vdsm
>>> source; only editing /usr/share/vdsm/storage/devicemapper.py does not
>>> really take effect.
>>>
>>>    Could this problem be treated as a bug and fixed in the ovirt vdsm
>>> source code?
>>>
>>> ------------------ Original ------------------
>>> *From: * "Yaniv Kaul"<ykaul at redhat.com>;
>>> *Date: * Sun, Dec 4, 2016 07:07 PM
>>> *To: * "胡茂荣"<maorong.hu at horebdata.cn>;
>>> *Cc: * "胡晓宇"<samuel.xhu at horebdata.cn>; "users"<users at ovirt.org>;
>>> "Sahina Bose"<sabose at redhat.com>; "Jeff Nelson"<jenelson at redhat.com>;
>>> *Subject: * Re: [ovirt-users] can not use iscsi storage type
>>> on ovirt and Glusterfs hyper-converged environment
>>>
>>>
>>>
>>> On Dec 2, 2016 11:53 AM, "胡茂荣" <maorong.hu at horebdata.cn> wrote:
>>>
>>>    I found that supervdsm uses "/usr/sbin/dmsetup status":
>>>
>>> MainProcess|jsonrpc.Executor/2::DEBUG::2016-12-02
>>> 17:12:16,372::supervdsmServer::92::SuperVdsm.ServerCallback::(wrapper)
>>> call getPathsStatus with () {}
>>> MainProcess|jsonrpc.Executor/2::DEBUG::2016-12-02
>>> 17:12:16,373::devicemapper::154::Storage.Misc.excCmd::(_getPathsStatus)
>>> /usr/bin/taskset --cpu-list 0-7 /usr/sbin/dmsetup status (cwd None)
>>> MainProcess|jsonrpc.Executor/2::DEBUG::2016-12-02
>>> 17:12:16,377::devicemapper::154::Storage.Misc.excCmd::(_getPathsStatus)
>>> SUCCESS: <err> = ''; <rc> = 0
>>> MainProcess|jsonrpc.Executor/2::ERROR::2016-12-02
>>> 17:12:16,378::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper) Error
>>> in getPathsStatus
>>>
>>> Problem:
>>> How can I change the Storage.Misc.excCmd _getPathsStatus command
>>> "/usr/bin/taskset --cpu-list 0-7 /usr/sbin/dmsetup status" to:
>>>
>>>    /usr/bin/taskset --cpu-list 0-7 /usr/sbin/dmsetup status --target
>>> multipath
>>>
>>>   I think that if supervdsm scanned only multipath devices when adding
>>> iscsi type storage, it would solve my problem. (My environment has other
>>> dm devices; "dmsetup status" shows them, and vdsm hits an error when it
>>> gets the dm path status.)
>>>
>>> ============================================================
>>> ====================
>>> So I made the following changes:
>>>  (1)
>>>    I defined EXT_DMSETUP_STATUS in
>>> /usr/lib/python2.7/site-packages/vdsm/constants.py:
>>>
>>> /usr/lib/python2.7/site-packages/vdsm/constants.py:EXT_DMSETUP =
>>> '/usr/sbin/dmsetup'
>>> /usr/lib/python2.7/site-packages/vdsm/constants.py:EXT_DMSETUP_STATUS =
>>> "/usr/sbin/dmsetup status --target multipath"
>>>
>>>  (2)
>>>  In /usr/share/vdsm/storage/devicemapper.py I added:
>>> from vdsm.constants import EXT_DMSETUP_STATUS
>>>
>>> and changed the getPathsStatus cmd to "EXT_DMSETUP_STATUS":
>>>
>>> def _getPathsStatus():
>>>     cmd = [EXT_DMSETUP_STATUS]          ##### before: cmd = [EXT_DMSETUP, "status"]
>>>
>>>
>>> Why not change this to:
>>> cmd = [EXT_DMSETUP,  "status", "--target", "multipath"]
>>>
>>> Y.
>>>
>>>     rc, out, err = misc.execCmd(cmd)
>>> ============================================================
>>> ===========================
>>>
>>>  But the supervdsm log still does not change. Please help me: how do I
>>> change the code so that supervdsm executes "/usr/sbin/dmsetup status
>>> --target multipath" in getPathsStatus()?
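>>>
>>> For what it's worth, the single-string constant cannot work here:
>>> misc.execCmd takes an argv list, so the whole string
>>> "/usr/sbin/dmsetup status --target multipath" is treated as one
>>> (non-existent) executable path. Also, a running supervdsmd does not pick
>>> up an edited devicemapper.py until it is restarted, which is probably why
>>> the log did not change. A minimal sketch of the change along the lines
>>> Yaniv suggested (the actual gerrit patch may differ in its details) is:
>>>
>>> def _getPathsStatus():
>>>     # Ask device-mapper only for maps whose target type is "multipath",
>>>     # so flashcache/LVM maps are never listed or parsed at all.
>>>     cmd = [EXT_DMSETUP, "status", "--target", "multipath"]
>>>     rc, out, err = misc.execCmd(cmd)
>>>     if rc != 0:
>>>         raise Exception("Could not get device statuses")
>>>     # ... the rest of the parsing stays exactly as in the current code ...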
>>>
>>>
>>>
>>>
>>> ------------------ Original ------------------
>>> *From: * "胡茂荣"<maorong.hu at horebdata.cn>;
>>> *Date: * Fri, Nov 25, 2016 05:44 PM
>>> *To: * "Sahina Bose"<sabose at redhat.com>;
>>> *Cc: * "Maor Lipchuk"<mlipchuk at redhat.com>; "Jeff Nelson"<
>>> jenelson at redhat.com>; "users"<users at ovirt.org>;
>>> *Subject: * Re: [ovirt-users] can not use iscsi storage type on
>>> ovirt and Glusterfs hyper-converged environment
>>>
>>>
>>> ===================================---
>>>
>>>    ### vdsm or supervdsm log reports:
>>>
>>>     MainProcess|jsonrpc.Executor/7::ERROR::2016-11-01
>>> 11:07:00,178::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)
>>> Error in getPathsStatus
>>>
>>> MainProcess|jsonrpc.Executor/4::ERROR::2016-11-01
>>> 11:07:20,964::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)
>>> Error in getPathsStatus
>>>  ============================  some code info ------------>
>>> [root at horeba storage]# pwd
>>> /usr/share/vdsm/storage
>>>
>>> [root at horeba storage]# grep "getPathsStatus" -R ./
>>> ./devicemapper.py:def _getPathsStatus():
>>> ./devicemapper.py:def getPathsStatus():
>>> ./devicemapper.py:    return getProxy().getPathsStatus()
>>> ./multipath.py:    pathStatuses = devicemapper.getPathsStatus()
>>>
>>> def _getPathsStatus():
>>>     cmd = [EXT_DMSETUP, "status"]
>>>     rc, out, err = misc.execCmd(cmd)
>>>     if rc != 0:
>>>         raise Exception("Could not get device statuses")
>>>
>>>     res = {}
>>>     for statusLine in out:
>>>         try:
>>>             devName, statusLine = statusLine.split(":", 1)
>>>         except ValueError:
>>>             if len(out) == 1:
>>>                 # return an empty dict when status output is: No devices found
>>>                 return res
>>>             else:
>>>                 raise
>>>
>>>         for m in PATH_STATUS_RE.finditer(statusLine):
>>>             devNum, status = m.groups()
>>>             physdevName = findDev(*[int(i) for i in devNum.split(":")])
>>>             res[physdevName] = {"A": "active", "F": "failed"}[status]
>>>
>>>     return res
>>> def getPathsStatus():
>>>     return getProxy().getPathsStatus()
>>> =============================================
>>>   and the flashcache dm devices make getPathsStatus() fail, presumably
>>> because their multi-line stats output contains lines without a "name:"
>>> prefix, so the split(":", 1) above raises ValueError. Could the code be
>>> changed so that flashcache dm devices are not checked at all?
>>> ========================================dmsetup info ----------->
>>> [root at horebc ~]# dmsetup status
>>> flash_sdb: 0 976771072 flashcache stats:
>>>         reads(1388761), writes(15548965)
>>>         read hits(1235671), read hit percent(88)
>>>         write hits(6539144) write hit percent(42)
>>>         dirty write hits(21372) dirty write hit percent(0)
>>>         replacement(147711), write replacement(524881)
>>>         write invalidates(0), read invalidates(1)
>>>         pending enqueues(810), pending inval(810)
>>>         metadata dirties(15196370), metadata cleans(15196322)
>>>         metadata batch(30087377) metadata ssd writes(305315)
>>>         cleanings(15196322) fallow cleanings(48187)
>>>         no room(337139) front merge(716153) back merge(14391395)
>>>         force_clean_block(0)
>>>         disk reads(153093), disk writes(15530535) ssd reads(16431974)
>>> ssd writes(15672221)
>>>         uncached reads(3714), uncached writes(334235), uncached IO
>>> requeue(0)
>>>         disk read errors(0), disk write errors(0) ssd read errors(0) ssd
>>> write errors(0)
>>>         uncached sequential reads(0), uncached sequential writes(0)
>>>         pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
>>>         lru hot blocks(12158976), lru warm blocks(12158976)
>>>         lru promotions(0), lru demotions(0)
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-metadata: 0 1048576 linear
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-leases: 0 4194304 linear
>>> 23137643634356633: 0 2147483648 multipath 2 0 0 0 1 1 A 0 1 2 8:128 A 0 0 1
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-master: 0 2097152 linear
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-inbox: 0 262144 linear
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-outbox: 0 262144 linear
>>>
>>> [root at horebc ~]# dmsetup info -C
>>> Name                                              Maj Min Stat Open Targ Event  UUID
>>> flash_sdb                                         253   0 L--w    1    1      0
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-metadata 253   4 L--w    0    1      0 LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIGwifAPyYj9GmjFzCmJkIf9vFFFHn9n7V
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-leases   253   6 L--w    0    1      0 LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIVSCllWEYYKziY1bSeiTL0dAKAd27JqDT
>>> 23137643634356633                                 253   3 L--w    6    1      0 mpath-23137643634356633
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-master   253   9 L--w    0    1      0 LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIiEnFZklRhZfFZ4YRdYWFImKWsUGr5pHg
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-inbox    253   8 L--w    0    1      0 LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGILobNK8KRD4SzDWyg50aG7jGdcNAi3KNw
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-outbox   253   5 L--w    0    1      0 LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIAvbT8CLegbVL802bG3QgLtH7I7llmS6R
>>> flash_sdf                                         253   2 L--w    1    1      0
>>> dedbd337--ca66--43ff--b78c--4e9347682a9c-ids      253   7 L--w    1    1      0 LVM-zSZKhQXbs2Ms5ceM2Wlt3p46BgsS8gGIIkeaFaWvxa9wbHy7WrfiXNsP4F2J3gg0
>>> flash_sdd                                         253   1 L--w    1    1      0
>>>
>>> ====================================================================
>>> and the flashcache dm maps have no UUID, so I think they could be
>>> excluded before checking.
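>>>
>>> As a quick check, restricting dmsetup to the multipath target should list
>>> only the mpath map above and skip the flashcache and LVM maps entirely,
>>> roughly:
>>>
>>> [root at horebc ~]# dmsetup status --target multipath
>>> 23137643634356633: 0 2147483648 multipath 2 0 0 0 1 1 A 0 1 2 8:128 A 0 0 1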
>>>
>>>
>>> humaorong
>>>   2016-11-25
>>>
>>>
>>>
>>> ------------------ Original ------------------
>>> *From: * "胡茂荣"<maorong.hu at horebdata.cn>;
>>> *Date: * Fri, Nov 25, 2016 01:18 PM
>>> *To: * "Sahina Bose"<sabose at redhat.com>;
>>> *Cc: * "Maor Lipchuk"<mlipchuk at redhat.com>; "Jeff Nelson"<
>>> jenelson at redhat.com>; "users"<users at ovirt.org>;
>>> *Subject: * Re: [ovirt-users] can not use iscsi storage type on
>>> ovirt and Glusterfs hyper-converged environment
>>>
>>>
>>>     I found more information about this problem:
>>>       I use flashcache on the ovirt hosts, which creates dm devices in
>>> /dev/mapper/:
>>>
>>> [root at horeba init.d]# dmsetup info -C     (and I set them in the multipath blacklist)
>>> Name             Maj Min Stat Open Targ Event  UUID
>>>
>>> flash_sdb        253   0 L--w    0    1      0
>>>
>>> flash_sdf        253   2 L--w    0    1      0
>>>
>>> flash_sdd        253   1 L--w    0    1      0
>>> [root at horeba init.d]# multipath -l
>>> [root at horeba init.d]#
>>>
>>> [root at horeba init.d]# ll /dev/mapper/
>>> total 0
>>> crw------- 1 root root 10, 236 Nov 25 10:09 control
>>> lrwxrwxrwx 1 root root       7 Nov 25 12:51 flash_sdb -> ../dm-0
>>> lrwxrwxrwx 1 root root       7 Nov 25 12:51 flash_sdd -> ../dm-1
>>> lrwxrwxrwx 1 root root       7 Nov 25 12:51 flash_sdf -> ../dm-2
>>>
>>>   Under this condition, adding iscsi type storage from the ovirt UI fails.
>>>
>>>    If I delete the flashcache devices /dev/mapper/flash_*, adding iscsi
>>> type storage from the ovirt UI works without problems.
>>>
>>>    I need flashcache for SSD caching in my environment. How can I use
>>> iscsi type storage in this environment? Please help me. Thanks!
>>>
>>>
>>> ------------------ Original ------------------
>>> *From: * "Sahina Bose"<sabose at redhat.com>;
>>> *Date: * Thu, Nov 3, 2016 05:54 PM
>>> *To: * "胡茂荣"<maorong.hu at horebdata.cn>;
>>> *Cc: * "Maor Lipchuk"<mlipchuk at redhat.com>; "Jeff Nelson"<
>>> jenelson at redhat.com>; "users"<users at ovirt.org>;
>>> *Subject: * Re: [ovirt-users] can not use iscsi storage type on
>>> ovirt and Glusterfs hyper-converged environment
>>>
>>> A wild guess, not sure if it is related - can you check whether the
>>> multipathd service is enabled? If you set up your oVirt-Gluster hyperconverged
>>> environment via gdeploy, multipathd service is disabled and the
>>> /etc/multipath.conf is edited to blacklist all devices - this was to fix
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1303940
>>>
>>> Since you mention you're unable to add iscsi storage only in this
>>> environment, thought it's worth checking.
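>>>
>>> For context, the blacklist that gdeploy writes is essentially of this form
>>> (illustrative only - the generated /etc/multipath.conf may contain more
>>> than this):
>>>
>>> blacklist {
>>>         devnode "*"        # keep every device node away from multipath
>>> }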
>>>
>>> On Thu, Nov 3, 2016 at 6:40 AM, 胡茂荣 <maorong.hu at horebdata.cn> wrote:
>>>
>>>>
>>>>      My environment rpms are:
>>>>  [root at horeba ~]# rpm -q vdsm
>>>> vdsm-4.18.13-1.el7.centos.x86_64
>>>>
>>>> [root at horeba ~]# rpm -aq | grep ovirt
>>>> ovirt-vmconsole-host-1.0.4-1.el7.centos.noarch
>>>> ovirt-imageio-common-0.4.0-1.el7.noarch
>>>> ovirt-hosted-engine-setup-2.0.2.2-1.el7.centos.noarch
>>>> ovirt-imageio-daemon-0.4.0-1.el7.noarch
>>>> ovirt-engine-appliance-4.0-20160928.1.el7.centos.noarch
>>>> ovirt-vmconsole-1.0.4-1.el7.centos.noarch
>>>> ovirt-host-deploy-1.5.2-1.el7.centos.noarch
>>>> ovirt-hosted-engine-ha-2.0.4-1.el7.centos.noarch
>>>> ovirt-release40-4.0.4-1.noarch
>>>> ovirt-setup-lib-1.0.2-1.el7.centos.noarch
>>>> ovirt-engine-sdk-python-3.6.9.1-1.el7.centos.noarch
>>>>
>>>>   And I tested that, when not on an 'ovirt and Glusterfs hyper-converged
>>>> environment', adding iscsi storage from the ovirt web UI works fine.
>>>>
>>>>
>>>> ------------------ Original ------------------
>>>> *From: * "Maor Lipchuk"<mlipchuk at redhat.com>;
>>>> *Date: * Wed, Nov 2, 2016 07:37 PM
>>>> *To: * "胡茂荣"<maorong.hu at horebdata.cn>;
>>>> *Cc: * "users"<users at ovirt.org>; "Jeff Nelson"<jenelson at redhat.com>;
>>>> "Nir Soffer"<nsoffer at redhat.com>;
>>>> *Subject: * Re: [ovirt-users] can not use iscsi storage type on
>>>> ovirt and Glusterfs hyper-converged environment
>>>>
>>>> Thanks for the logs,
>>>>
>>>> What kind of VDSM version are you using?
>>>>     "rpm -q vdsm"
>>>> There seems to be a similar issue which was reported recently in the
>>>> VDSM area
>>>> (see https://bugzilla.redhat.com/show_bug.cgi?id=1197292).
>>>> It should be fixed in later versions of VDSM
>>>> (vdsm-4.16.12-2.el7ev.x86_64).
>>>> Adding also Nir and Jeff to the thread, in case they have any insights.
>>>>
>>>> Regards,
>>>> Maor
>>>>
>>>> On Wed, Nov 2, 2016 at 4:11 AM, 胡茂荣 <maorong.hu at horebdata.cn> wrote:
>>>>
>>>>>
>>>>>  Hi Maor:
>>>>>       The vdsm/supervdsm/engine logs are in the attachment. I ran
>>>>> mkfs.xfs on the lun block device, mounted it to /mnt, and wrote with dd;
>>>>> dmesg reports no errors and the dd results are ok:
>>>>>
>>>>> /dev/sdi                      50G   33M   50G   1% /mnt
>>>>>
>>>>> [root at horebc mnt]# for i in `seq 3`; do dd if=/dev/zero of=./file
>>>>> bs=1G count=1 oflag=direct ; done
>>>>> 1+0 records in
>>>>> 1+0 records out
>>>>> 1073741824 bytes (1.1 GB) copied, 13.3232 s, 80.6 MB/s
>>>>> 1+0 records in
>>>>> 1+0 records out
>>>>> 1073741824 bytes (1.1 GB) copied, 9.89988 s, 108 MB/s
>>>>> 1+0 records in
>>>>> 1+0 records out
>>>>> 1073741824 bytes (1.1 GB) copied, 14.0143 s, 76.6 MB/s
>>>>>
>>>>>    My environment has three network segments (the hosts have 3 network
>>>>> segments):
>>>>>        engine and glusterfs mount: 192.168.11.X/24
>>>>>        glusterfs brick: 192.168.10.x/24
>>>>>        iscsi: 192.168.1.0/24
>>>>>
>>>>>     And I added 192.168.1.0/24 to the engine vm; the ovirt web UI reports
>>>>> the same error.
>>>>>
>>>>>  humaorong
>>>>>   2016-11-2
>>>>>
>>>>> ------------------ Original ------------------
>>>>> *From: * "Maor Lipchuk"<mlipchuk at redhat.com>;
>>>>> *Date: * Tue, Nov 1, 2016 08:14 PM
>>>>> *To: * "胡茂荣"<maorong.hu at horebdata.cn>;
>>>>> *Cc: * "users"<users at ovirt.org>;
>>>>> *Subject: * Re: [ovirt-users] can not use iscsi storage type on ovirt
>>>>> and Glusterfs hyper-converged environment
>>>>>
>>>>> Hi 胡茂荣, can you please also add the VDSM and engine logs?
>>>>> If you try to discover and connect to those luns directly from your host,
>>>>> does it work?
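>>>>>
>>>>> (For a quick manual check from the host, something along these lines -
>>>>> the portal address and target IQN below are placeholders:
>>>>>     iscsiadm -m discovery -t sendtargets -p <portal-ip>
>>>>>     iscsiadm -m node -T <target-iqn> -p <portal-ip> --login
>>>>> )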
>>>>>
>>>>> Regards,
>>>>> Maor
>>>>>
>>>>>
>>>>> On Tue, Nov 1, 2016 at 6:12 AM, 胡茂荣 <maorong.hu at horebdata.cn> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>>     On an ovirt and Glusterfs hyper-converged environment, I cannot use
>>>>>> the iscsi storage type. The UI reports the error "Could not retrieve LUNs,
>>>>>> please check your storage.", and vdsm reports "VDSM hosted_engine_3
>>>>>> command failed: Error block device action: ()".
>>>>>>     But this block device is also logged in on the centos 7 host:
>>>>>> =============================================================
>>>>>>
>>>>>> ## lsscsi
>>>>>>
>>>>>> [7:0:0:0]   disk    SCST_BIO DEVFOR_OVIRT_rbd  221  /dev/sdi
>>>>>>
>>>>>>   ## dmesg :
>>>>>>
>>>>>> [684521.131186] sd 7:0:0:0: [sdi] Attached SCSI disk
>>>>>>
>>>>>> ===================================---
>>>>>>
>>>>>>    ### vdsm or supervdsm log reports:
>>>>>>
>>>>>>     MainProcess|jsonrpc.Executor/7::ERROR::2016-11-01
>>>>>> 11:07:00,178::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)
>>>>>> Error in getPathsStatus
>>>>>>
>>>>>> MainProcess|jsonrpc.Executor/4::ERROR::2016-11-01
>>>>>> 11:07:20,964::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)
>>>>>> Error in getPathsStatus
>>>>>>
>>>>>>    jsonrpc.Executor/4::DEBUG::2016-11-01
>>>>>> 11:07:04,251::iscsi::434::Storage.ISCSI::(rescan) Performing SCSI
>>>>>> scan, this will take up to 30 seconds
>>>>>>
>>>>>> jsonrpc.Executor/5::INFO::2016-11-01 11:07:19,413::iscsi::567::Storage.ISCSI::(setRpFilterIfNeeded) iSCSI iface.net_ifacename not provided. Skipping.
>>>>>>
>>>>>> 11:09:15,753::iscsiadm::119::Storage.Misc.excCmd::(_runCmd)
>>>>>> /usr/bin/taskset --cpu-list 0-7 /usr/bin/sudo -n /usr/sbin/iscsiadm -m
>>>>>> session -R (cwd None)
>>>>>>
>>>>>> ======================================
>>>>>>
>>>>>>      For other information, please see the attachment "bug-info.doc".
>>>>>>
>>>>>>      This problem occurs on ovirt 3.6 and 4.X ovirt and Glusterfs
>>>>>> hyper-converged environments. How can I use the iscsi storage type on an
>>>>>> ovirt and Glusterfs hyper-converged environment? Please help me!
>>>>>>
>>>>>>     humaorong
>>>>>>
>>>>>>    2016-11-1
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>

