[ovirt-users] GetStorageDeviceListVDS failed

Joel Diaz mrjoeldiaz at gmail.com
Mon May 29 16:33:13 UTC 2017


Good morning,

Any advice on how to remove those stale device mapper entries?
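
For reference, the usual cleanup on CentOS 7 would run along these lines --
only a sketch, assuming the 36589cfc... maps really are unused leftovers from
the removed iSCSI domain (the names are taken from the lvm output quoted
below; double-check with "dmsetup ls" before removing anything):

  dmsetup ls                                      # list all device-mapper maps
  multipath -f 36589cfc000000f05aea0f2b50f8d76e5  # flush the stale multipath map
  dmsetup remove <stale-map-name>                 # remove a leftover dm entry directly
  iscsiadm -m session                             # check for leftover iSCSI sessions
  iscsiadm -m node -T <target-iqn> -u             # log out of the stale target

The LVs of the removed domain (the 81f7db35-... entries) can be cleared the
same way, using the names "dmsetup ls" reports.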

Thank you,

On Thu, May 25, 2017 at 12:52 PM, Joel Diaz <mrjoeldiaz at gmail.com> wrote:

> Sahina,
>
> I believe you're correct regarding the stale devicemapper entries.
>
> Before this started happening, I added a new iSCSI storage domain. Host 1
> couldn't communicate with it, and the engine kept setting host 1 as
> non-operational. I removed the iSCSI domain to get host 1 working again, but
> the other two hosts must still think the iSCSI storage is attached.
>
> Below is the output from both hosts.
>
> [root@<host2> ~]# lvm lvs -a --unit k --nosuffix --nameprefixes --unquoted
> --noheadings -ovg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype
>
>   /dev/mapper/36589cfc000000f05aea0f2b50f8d76e5: read failed after 0 of
> 4096 at 0: Input/output error
>
>   /dev/mapper/36589cfc000000f05aea0f2b50f8d76e5: read failed after 0 of
> 4096 at 1099511562240: Input/output error
>
>   /dev/mapper/36589cfc000000f05aea0f2b50f8d76e5: read failed after 0 of
> 4096 at 1099511619584: Input/output error
>
>   /dev/mapper/36589cfc000000f05aea0f2b50f8d76e5: read failed after 0 of
> 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/metadata: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/metadata: read failed after 0
> of 4096 at 536805376: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/metadata: read failed after 0
> of 4096 at 536862720: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/metadata: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
> 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
> 4096 at 134152192: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
> 4096 at 134209536: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
> 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
> of 4096 at 2147418112: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
> of 4096 at 2147475456: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
> of 4096 at 134152192: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
> of 4096 at 134209536: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
> of 4096 at 1073676288: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
> of 4096 at 1073733632: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0 of
> 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0 of
> 4096 at 134152192: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0 of
> 4096 at 134209536: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0 of
> 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
> of 4096 at 1073676288: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
> of 4096 at 1073733632: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/mapper/36589cfc000000b8e821765d5febefda2: read failed after 0 of
> 4096 at 0: Input/output error
>
>   /dev/mapper/36589cfc000000b8e821765d5febefda2: read failed after 0 of
> 4096 at 536870846464: Input/output error
>
>   /dev/mapper/36589cfc000000b8e821765d5febefda2: read failed after 0 of
> 4096 at 536870903808: Input/output error
>
>   /dev/mapper/36589cfc000000b8e821765d5febefda2: read failed after 0 of
> 4096 at 4096: Input/output error
>
>   LVM2_VG_NAME=centos_ovirt-hyp-02 LVM2_LV_NAME=home
> LVM2_LV_UUID=WmKd5N-uhew-w2uu-P5Qy-Xpeq-quDz-LrLM3n
> LVM2_LV_SIZE=63627264.00 LVM2_LV_ATTR=-wi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=centos_ovirt-hyp-02 LVM2_LV_NAME=root
> LVM2_LV_UUID=qeFOoZ-3WR0-LX2J-BZXh-oo9u-mqXJ-NRJkq3
> LVM2_LV_SIZE=52428800.00 LVM2_LV_ATTR=-wi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=centos_ovirt-hyp-02 LVM2_LV_NAME=swap
> LVM2_LV_UUID=FQ6NIv-N5bc-uo6Q-IdZS-i6XL-Y4TQ-5mOkSd
> LVM2_LV_SIZE=8192000.00 LVM2_LV_ATTR=-wi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_lv_data
> LVM2_LV_UUID=nhoFx5-aHEd-ZUe4-eEuI-FItR-I0nJ-JS1wD3
> LVM2_LV_SIZE=681574400.00 LVM2_LV_ATTR=Vwi-aotz-- LVM2_SEGTYPE=thin
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_lv_engine
> LVM2_LV_UUID=QAyxti-wHCB-LqEF-HVdK-S1KM-i99p-QBhCun
> LVM2_LV_SIZE=26214400.00 LVM2_LV_ATTR=-wi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_lv_export
> LVM2_LV_UUID=Gb4qNx-djQV-85oW-l8Q8-3AfH-zvoa-Y4EFie
> LVM2_LV_SIZE=104857600.00 LVM2_LV_ATTR=Vwi-aotz-- LVM2_SEGTYPE=thin
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_lv_iso
> LVM2_LV_UUID=9zHjQd-QW21-Nu5C-8ggR-y3R7-TgsV-b1ydGp
> LVM2_LV_SIZE=52428800.00 LVM2_LV_ATTR=Vwi-aotz-- LVM2_SEGTYPE=thin
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_thinpool_sdb
> LVM2_LV_UUID=F8J1Uh-So1O-2cju-su1W-BFxI-cgfe-QBYpuT
> LVM2_LV_SIZE=838860800.00 LVM2_LV_ATTR=twi-aotz-- LVM2_SEGTYPE=thin-pool
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=[gluster_thinpool_sdb_tdata]
> LVM2_LV_UUID=gkWCVU-yYuE-iqvU-9p79-k8Oq-EK4g-kBg6py
> LVM2_LV_SIZE=838860800.00 LVM2_LV_ATTR=Twi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=[gluster_thinpool_sdb_tmeta]
> LVM2_LV_UUID=6VDVJo-AAgI-HcrP-ANQM-qvgo-GMOp-HAYeQM
> LVM2_LV_SIZE=16777216.00 LVM2_LV_ATTR=ewi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=[lvol0_pmspare]
> LVM2_LV_UUID=STxeFv-0sl2-fCqz-jm4q-kVB5-sek1-BwSVNc
> LVM2_LV_SIZE=16777216.00 LVM2_LV_ATTR=ewi------- LVM2_SEGTYPE=linear
>
>
>
>   [root@<host3> ~]# lvm lvs -a --unit k --nosuffix --nameprefixes
> --unquoted --noheadings -ovg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype
>
>   /dev/mapper/36589cfc000000f05aea0f2b50f8d76e5: read failed after 0 of
> 4096 at 0: Input/output error
>
>   /dev/mapper/36589cfc000000f05aea0f2b50f8d76e5: read failed after 0 of
> 4096 at 1099511562240: Input/output error
>
>   /dev/mapper/36589cfc000000f05aea0f2b50f8d76e5: read failed after 0 of
> 4096 at 1099511619584: Input/output error
>
>   /dev/mapper/36589cfc000000f05aea0f2b50f8d76e5: read failed after 0 of
> 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/metadata: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/metadata: read failed after 0
> of 4096 at 536805376: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/metadata: read failed after 0
> of 4096 at 536862720: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/metadata: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
> of 4096 at 134152192: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
> of 4096 at 134209536: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/outbox: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
> of 4096 at 1073676288: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
> of 4096 at 1073733632: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/xleases: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
> of 4096 at 2147418112: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
> of 4096 at 2147475456: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/leases: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
> 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
> 4096 at 134152192: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
> 4096 at 134209536: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/ids: read failed after 0 of
> 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0 of
> 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0 of
> 4096 at 134152192: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0 of
> 4096 at 134209536: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/inbox: read failed after 0 of
> 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
> of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
> of 4096 at 1073676288: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
> of 4096 at 1073733632: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/master: read failed after 0
> of 4096 at 4096: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/d7e958f7-e307-4c33-95ea-f98532ad6fd0:
> read failed after 0 of 4096 at 0: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/d7e958f7-e307-4c33-95ea-f98532ad6fd0:
> read failed after 0 of 4096 at 21608988672: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/d7e958f7-e307-4c33-95ea-f98532ad6fd0:
> read failed after 0 of 4096 at 21609046016: Input/output error
>
>   /dev/81f7db35-cff2-4592-bad7-55f437bb09ee/d7e958f7-e307-4c33-95ea-f98532ad6fd0:
> read failed after 0 of 4096 at 4096: Input/output error
>
>   /dev/mapper/36589cfc000000b8e821765d5febefda2: read failed after 0 of
> 4096 at 0: Input/output error
>
>   /dev/mapper/36589cfc000000b8e821765d5febefda2: read failed after 0 of
> 4096 at 536870846464: Input/output error
>
>   /dev/mapper/36589cfc000000b8e821765d5febefda2: read failed after 0 of
> 4096 at 536870903808: Input/output error
>
>   /dev/mapper/36589cfc000000b8e821765d5febefda2: read failed after 0 of
> 4096 at 4096: Input/output error
>
>   LVM2_VG_NAME=centos_ovirt-hyp-03 LVM2_LV_NAME=home
> LVM2_LV_UUID=d4pR6F-JD7f-JR7F-RjYv-EkB9-ka7b-4BVJqZ
> LVM2_LV_SIZE=63832064.00 LVM2_LV_ATTR=-wi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=centos_ovirt-hyp-03 LVM2_LV_NAME=root
> LVM2_LV_UUID=Dk3ihf-qtXB-8h7m-0zul-d9wo-oTGd-4gijh1
> LVM2_LV_SIZE=52428800.00 LVM2_LV_ATTR=-wi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=centos_ovirt-hyp-03 LVM2_LV_NAME=swap
> LVM2_LV_UUID=RVT0Wl-f3Kx-vxha-CIFA-nK1M-IEeU-CNyFFu
> LVM2_LV_SIZE=8192000.00 LVM2_LV_ATTR=-wi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_lv_data
> LVM2_LV_UUID=ILg0Ym-l0M3-Zxg7-heDz-VsDF-yfHJ-jy8gJG
> LVM2_LV_SIZE=10485760.00 LVM2_LV_ATTR=Vwi-aotz-- LVM2_SEGTYPE=thin
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_lv_engine
> LVM2_LV_UUID=NnIR9J-4PIP-ksQA-PdpX-1epc-X5iT-Wk9qiO
> LVM2_LV_SIZE=10485760.00 LVM2_LV_ATTR=-wi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_lv_export
> LVM2_LV_UUID=z6hJeU-wb4z-WzHv-jHKv-2sB5-BUoR-XK9zRf
> LVM2_LV_SIZE=10485760.00 LVM2_LV_ATTR=Vwi-aotz-- LVM2_SEGTYPE=thin
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_lv_iso
> LVM2_LV_UUID=kj9fMh-SBym-erPu-sgOe-m6kU-TWsc-LmRP65
> LVM2_LV_SIZE=10485760.00 LVM2_LV_ATTR=Vwi-aotz-- LVM2_SEGTYPE=thin
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=gluster_thinpool_sdb
> LVM2_LV_UUID=FGu1eb-T273-fEY0-pThE-lXXc-zBCs-OfEJkJ
> LVM2_LV_SIZE=73400320.00 LVM2_LV_ATTR=twi-aotz-- LVM2_SEGTYPE=thin-pool
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=[gluster_thinpool_sdb_tdata]
> LVM2_LV_UUID=CnqJbS-o7XA-4XD0-0nQ5-t3Wu-iRfW-Nc337j
> LVM2_LV_SIZE=73400320.00 LVM2_LV_ATTR=Twi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=[gluster_thinpool_sdb_tmeta]
> LVM2_LV_UUID=f9KiCq-UUMN-eve1-lqLK-0lAI-VRP5-ASKGPt
> LVM2_LV_SIZE=16777216.00 LVM2_LV_ATTR=ewi-ao---- LVM2_SEGTYPE=linear
>
>   LVM2_VG_NAME=gluster_vg_sdb LVM2_LV_NAME=[lvol0_pmspare]
> LVM2_LV_UUID=C24xEs-njob-0E7Q-vwJG-q814-FmMQ-ebrb33
> LVM2_LV_SIZE=16777216.00 LVM2_LV_ATTR=ewi------- LVM2_SEGTYPE=linear
>
> How can I clean that up?
>
> Thank you,
>
> Joel
>
>
>
> On May 25, 2017 11:49 AM, "Sahina Bose" <sabose at redhat.com> wrote:
>
>
>
> On Thu, May 25, 2017 at 8:29 PM, Joel Diaz <mrjoeldiaz at gmail.com> wrote:
>
>> I almost forgot about python-blivet. All 3 hosts are on version
>> 0.61.15.59-1.
>>
>> The hosts and engine were updated using the snapshot repo last week. I
>> updated them just now. I noticed that hosts 2 and 3 required the same 20
>> updates, but host 1 required an additional update, ovirt-engine-appliance.
>>
>>
>>
>> On May 25, 2017 10:40 AM, "Joel Diaz" <mrjoeldiaz at gmail.com> wrote:
>>
>> Hello Sahina,
>>
>> Thanks for the response.
>>
>> Attached are the requested supervdsm logs from both hosts.
>>
>>
> Seems to be an error returned by the lvm module.
>
> "lvm lvs -a --unit k --nosuffix --nameprefixes --unquoted --noheadings
> -ovg_name,lv_name,lv_uuid,lv_size,lv_attr,segtype" seems to throw this
> error. Can you check?
>
> Do you have stale devicemapper entries on these hosts?
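>
> (A quick way to check -- just a sketch: compare what device-mapper still
> exposes against what multipath and LVM can actually reach:)
>
>   dmsetup ls      # every device-mapper map on the host
>   multipath -ll   # the maps multipathd considers live
>   pvs             # the PVs LVM can still read
>
> (A 36589cfc... map that appears in "dmsetup ls" but errors out, or is
> missing from "multipath -ll", would be a stale leftover.)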
>
>
>> Joel
>>
>>
>>
>> On May 25, 2017 5:41 AM, "Sahina Bose" <sabose at redhat.com> wrote:
>>
>>> Could you provide the supervdsm.log from either host2 or host3?
>>>
>>> Were the packages on these hosts updated?
>>> What's the version of python-blivet? Is this different from host1?
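>>>
>>> ("rpm -q python-blivet" on each host is a quick way to compare versions.)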
>>>
>>> On Wed, May 24, 2017 at 6:26 PM, Joel Diaz <mrjoeldiaz at gmail.com> wrote:
>>>
>>>> Good morning oVirt community,
>>>>
>>>> I need some assistance.
>>>>
>>>> I am running a three-host, hosted-engine, Gluster environment. The hosts
>>>> are running CentOS 7.3 and the engine is version 4.1.2.3.
>>>>
>>>> Since yesterday, every 2 hours, the engine reports the error below on
>>>> hosts 2 and 3.
>>>>
>>>> event ID 10802
>>>>
>>>> VDSM <host> command GetStorageDeviceListVDS failed:
>>>> 'gluster_vg_sdb-/dev/mapper/36589cfc000000f05aea0f2b50f8d76e5: read
>>>> failed after 0 of 4096 at 0: Input/output error'
>>>>
>>>> I've attached logs from both hosts. Host 3 is the SPM and holds the
>>>> arbiter brick of all 4 Gluster volumes.
>>>>
>>>> As always, your help is appreciated.
>>>>
>>>> Joel
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>
>
>