On Mon, May 20, 2019 at 9:55 PM Adrian Quintero <adrianquintero(a)gmail.com>
wrote:
Sahina,
Yesterday I started with a fresh install: I completely wiped all the
disks and recreated the arrays from within the controller of our DL380 Gen 9s.
OS: RAID 1 (2x600GB HDDs): /dev/sda // using the oVirt Node 4.3.3.1 ISO
engine and VMSTORE1: JBOD (1x3TB HDD): /dev/sdb
DATA1: JBOD (1x3TB HDD): /dev/sdc
DATA2: JBOD (1x3TB HDD): /dev/sdd
Caching disk: JBOD (1x440GB SSD): /dev/sde
*After the OS install on the first 3 servers and setting up SSH keys, I
started the hyperconverged deployment process:*
1.-Logged in to the first server at
http://host1.example.com:9090
2.-Selected Hyperconverged, clicked on "Run Gluster Wizard"
3.-Followed the wizard steps (Hosts, FQDNs, Packages, Volumes, Bricks,
Review)
*Hosts/FQDNs:*
host1.example.com
host2.example.com
host3.example.com
*Packages:*
*Volumes:*
engine:replicate:/gluster_bricks/engine/engine
vmstore1:replicate:/gluster_bricks/vmstore1/vmstore1
data1:replicate:/gluster_bricks/data1/data1
data2:replicate:/gluster_bricks/data2/data2
*Bricks:*
engine:/dev/sdb:100GB:/gluster_bricks/engine
vmstore1:/dev/sdb:2600GB:/gluster_bricks/vmstore1
data1:/dev/sdc:2700GB:/gluster_bricks/data1
data2:/dev/sdd:2700GB:/gluster_bricks/data2
LV Cache:
/dev/sde:400GB:writethrough
4.-After I hit "Deploy" on the last step of the wizard, that is when I get
the disk filter error:
TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"changed": false, "err": " Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/sdc'}) => {"changed": false, "err": " Device /dev/sdc excluded by a filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/sdc'}) => {"changed": false, "err": " Device /dev/sdc excluded by a filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdc', u'pvname': u'/dev/sdc'}) => {"changed": false, "err": " Device /dev/sdc excluded by a filter.\n", "item": {"pvname": "/dev/sdc", "vgname": "gluster_vg_sdc"}, "msg": "Creating physical volume '/dev/sdc' failed", "rc": 5}
failed: [vmm10.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm12.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
failed: [vmm11.virt.iad3p] (item={u'vgname': u'gluster_vg_sdd', u'pvname': u'/dev/sdd'}) => {"changed": false, "err": " Device /dev/sdd excluded by a filter.\n", "item": {"pvname": "/dev/sdd", "vgname": "gluster_vg_sdd"}, "msg": "Creating physical volume '/dev/sdd' failed", "rc": 5}
Attached are the generated yml file (/etc/ansible/hc_wizard_inventory.yml)
and the "Deployment Failed" file.
Also wondering if I hit this bug?
https://bugzilla.redhat.com/show_bug.cgi?id=1635614
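For reference, the exclusion can usually be narrowed down on one of the hosts
roughly like this (a sketch only; exact output will differ per host):
-------------------------------------------------------------
# show any LVM filter that is currently active
grep -E '^[[:space:]]*(filter|global_filter)' /etc/lvm/lvm.conf

# a verbose test run of pvcreate usually names what rejects the device
pvcreate --test -vvv /dev/sdb 2>&1 | grep -i -e filter -e excluded

# leftover signatures or a multipath map on the disk are common culprits
blkid /dev/sdb
wipefs -n /dev/sdb    # dry run; shows signatures without wiping
multipath -ll         # look for a map that lists sdb as one of its paths
-------------------------------------------------------------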
+Sachidananda URS <surs(a)redhat.com> +Gobinda Das <godas(a)redhat.com> to
review the inventory file and failures
Thanks for looking into this.
*Adrian Quintero*
*adrianquintero(a)gmail.com <adrianquintero(a)gmail.com> |
adrian.quintero(a)rackspace.com <adrian.quintero(a)rackspace.com>*
On Mon, May 20, 2019 at 7:56 AM Sahina Bose <sabose(a)redhat.com> wrote:
> To scale existing volumes, you need to add bricks and run rebalance on
> the Gluster volume so that data is correctly redistributed, as Alex
> mentioned.
> We do support expanding existing volumes, as the bug
> https://bugzilla.redhat.com/show_bug.cgi?id=1471031 has been fixed.
>
> As to the procedure to expand volumes:
> 1. Create bricks from the UI - select Host -> Storage Devices, select the
> storage device, and click on "Create Brick".
> If the device is shown as locked, make sure there's no signature on the
> device. If multipath entries have been created for local devices, you can
> blacklist those devices in multipath.conf and restart multipath.
> (If you still see the device as locked after you do this, please report back.)
> 2. Expand the volume using Volume -> Bricks -> Add Bricks, and select the 3
> bricks created in the previous step.
> 3. Run Rebalance on the volume: Volume -> Rebalance. (A rough CLI equivalent
> of steps 2 and 3 is sketched below.)
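> For reference, roughly the same expansion can be done from the gluster CLI
> on any of the hosts; the host names below (host4/host5/host6) are only
> placeholders, and data1 stands in for whichever volume is being expanded:
>
> # add one new brick per host to an existing replica 3 volume
> gluster volume add-brick data1 replica 3 host4:/gluster_bricks/data1/data1 host5:/gluster_bricks/data1/data1 host6:/gluster_bricks/data1/data1
>
> # redistribute existing data across the new bricks
> gluster volume rebalance data1 start
> gluster volume rebalance data1 status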
>
>
> On Thu, May 16, 2019 at 2:48 PM Fred Rolland <frolland(a)redhat.com> wrote:
>
>> Sahina,
>> Can someone from your team review the steps done by Adrian?
>> Thanks,
>> Freddy
>>
>> On Thu, Apr 25, 2019 at 5:14 PM Adrian Quintero <
>> adrianquintero(a)gmail.com> wrote:
>>
>>> Ok, I will remove the extra 3 hosts, rebuild them from scratch and
>>> re-attach them to clear any possible issues and try out the suggestions
>>> provided.
>>>
>>> thank you!
>>>
>>> On Thu, Apr 25, 2019 at 9:22 AM Strahil Nikolov <hunter86_bg(a)yahoo.com>
>>> wrote:
>>>
>>>> I have the same locks, despite having blacklisted all local disks:
>>>>
>>>> # VDSM PRIVATE
>>>> blacklist {
>>>> devnode "*"
>>>> wwid Crucial_CT256MX100SSD1_14390D52DCF5
>>>> wwid WDC_WD5000AZRX-00A8LB0_WD-WCC1U0056126
>>>> wwid WDC_WD5003ABYX-01WERA0_WD-WMAYP2335378
>>>> wwid nvme.1cc1-324a31313230303131353936-414441544120535838323030504e50-00000001
>>>> }
>>>>
>>>> If you have reconfigured multipath, do not forget to rebuild the
>>>> initramfs (dracut -f). It's a Linux issue, not an oVirt one.
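>>>> (Roughly, after changing multipath.conf, something like this applies it
>>>> and makes it stick - just a sketch, adapt it to your own setup:)
>>>>
>>>> # pick up the new blacklist and drop any now-unused maps
>>>> systemctl restart multipathd.service
>>>> multipath -F
>>>> multipath -ll    # the local disks should no longer show a map
>>>> # rebuild the initramfs so early boot uses the same configuration
>>>> dracut -f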
>>>>
>>>> In your case you had something like this:
>>>> /dev/VG/LV
>>>> /dev/disk/by-id/pvuuid
>>>> /dev/mapper/multipath-uuid
>>>> /dev/sdb
>>>>
>>>> Linux will not allow you to work with /dev/sdb when multipath is
>>>> locking the block device.
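>>>> (A quick way to see that stacking on an affected host - a sketch only:)
>>>>
>>>> # if multipath has claimed the disk, lsblk shows an mpath device on top of sdb
>>>> lsblk /dev/sdb
>>>> # the persistent names pointing at the same disk
>>>> ls -l /dev/disk/by-id/ | grep -i sdb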
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Thursday, April 25, 2019, 8:30:16 AM GMT-4, Adrian Quintero <
>>>> adrianquintero(a)gmail.com> wrote:
>>>>
>>>>
>>>> Under Compute -> Hosts, select the host that has the locks on /dev/sdb,
>>>> /dev/sdc, etc., then select Storage Devices; that is where you see a
>>>> small column with a lock icon shown for each row.
>>>>
>>>>
>>>> However, as a workaround, on the newly added hosts (3 total) I had to
>>>> manually modify /etc/multipath.conf and add the following at the end, as
>>>> this is what I noticed on the original 3-node setup.
>>>>
>>>> -------------------------------------------------------------
>>>> # VDSM REVISION 1.3
>>>> # VDSM PRIVATE
>>>> # BEGIN Added by gluster_hci role
>>>>
>>>> blacklist {
>>>> devnode "*"
>>>> }
>>>> # END Added by gluster_hci role
>>>> ----------------------------------------------------------
>>>> After this I restarted multipath, the lock went away, and I was able
>>>> to configure the new bricks through the UI. However, my concern is what will
>>>> happen if I reboot the server: will the disks be read the same way by the
>>>> OS?
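>>>> (For what it's worth, after a test reboot something like this should
>>>> confirm nothing re-grabbed the disks - just a sketch:)
>>>>
>>>> # local disks should show no multipath maps and the bricks should be mounted
>>>> multipath -ll
>>>> lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT /dev/sdb /dev/sdc /dev/sdd
>>>> grep gluster_bricks /proc/mounts
>>>> # the "# VDSM PRIVATE" header keeps VDSM from rewriting the file
>>>> head -n 3 /etc/multipath.conf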
>>>>
>>>> I am also now able to expand the Gluster setup with a new replica 3 volume
>>>> if needed, using http://host4.mydomain.com:9090.
>>>>
>>>>
>>>> thanks again
>>>>
>>>> On Thu, Apr 25, 2019 at 8:00 AM Strahil Nikolov <hunter86_bg(a)yahoo.com>
>>>> wrote:
>>>>
>>>> In which menu do you see it this way?
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Wednesday, April 24, 2019, 8:55:22 AM GMT-4, Adrian Quintero <
>>>> adrianquintero(a)gmail.com> wrote:
>>>>
>>>>
>>>> Strahil,
>>>> this is the issue I am seeing now
>>>>
>>>> [image: image.png]
>>>>
>>>> This is through the UI when I try to create a new brick.
>>>>
>>>> So my concern is: if I modify the filters on the OS, what impact will
>>>> that have after the server reboots?
>>>>
>>>> thanks,
>>>>
>>>>
>>>>
>>>> On Mon, Apr 22, 2019 at 11:39 PM Strahil <hunter86_bg(a)yahoo.com>
>>>> wrote:
>>>>
>>>> I have edited my multipath.conf to exclude local disks, but you need
>>>> to set '# VDSM PRIVATE' as per the comments in the header of the file.
>>>> Otherwise, use the /dev/mapper/multipath-device notation, as you
>>>> would with any Linux system.
>>>>
>>>> Best Regards,
>>>> Strahil Nikolov
>>>>
>>>> On Apr 23, 2019 01:07, adrianquintero(a)gmail.com wrote:
>>>> >
>>>> > Thanks Alex, that makes more sense now. While trying to follow the
>>>> > instructions provided, I see that all my disks /dev/sdb, /dev/sdc, /dev/sdd
>>>> > are locked and indicating "multipath_member", hence not letting me create
>>>> > new bricks. And in the logs I see:
>>>> >
>>>> > Device /dev/sdb excluded by a filter.\n", "item": {"pvname": "/dev/sdb",
>>>> > "vgname": "gluster_vg_sdb"}, "msg": "Creating physical volume '/dev/sdb'
>>>> > failed", "rc": 5}
>>>> > Same thing for sdc, sdd
>>>> >
>>>> > Should I manually edit the filters inside the OS, and what would the
>>>> > impact be?
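>>>> > (For context, the kind of edit I mean would be something like this in the
>>>> > devices section of /etc/lvm/lvm.conf - only an illustration, the patterns
>>>> > would have to match the actual devices:)
>>>> >
>>>> > # accept the OS disk and the gluster disks, reject everything else
>>>> > filter = [ "a|^/dev/sda|", "a|^/dev/sd[bcd]|", "r|.*|" ]
>>>> > # regenerate the initramfs afterwards (dracut -f) so early boot
>>>> > # uses the same filter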
>>>> >
>>>> > thanks again.
>>>>
>>>>
>>>>
>>>> --
>>>> Adrian Quintero
>>>>
>>>>
>>>>
>>>> --
>>>> Adrian Quintero
>>>>
>>>
>>>
>>> --
>>> Adrian Quintero
>>>
>>
--
Adrian Quintero