Fwd: Re: Single node single node SelfHosted Hyperconverged

---------- Forwarded message ----------
From: Leo David <leoalex@gmail.com>
Date: Tue, Jun 12, 2018 at 7:57 PM
Subject: Re: [ovirt-users] Re: Single node single node SelfHosted Hyperconverged
To: femi adegoke <ovirt@fateknollogee.com>

Thank you very much for your response; now it feels I can barely see the light! So:

multipath -ll
3614187705c01820022b002b00c52f72e dm-1 DELL ,PERC H730P Mini
size=931G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:2:0:0 sda 8:0 active ready running

lsblk
NAME                                                    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                       8:0    0   931G  0 disk
├─sda1                                                    8:1    0     1G  0 part
├─sda2                                                    8:2    0   930G  0 part
└─3614187705c01820022b002b00c52f72e                     253:1    0   931G  0 mpath
  ├─3614187705c01820022b002b00c52f72e1                  253:3    0     1G  0 part  /boot
  └─3614187705c01820022b002b00c52f72e2                  253:4    0   930G  0 part
    ├─onn-pool00_tmeta                                  253:6    0     1G  0 lvm
    │ └─onn-pool00-tpool                                253:8    0 825.2G  0 lvm
    │   ├─onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1  253:9    0 798.2G  0 lvm   /
    │   ├─onn-pool00                                    253:12   0 825.2G  0 lvm
    │   ├─onn-var_log_audit                             253:13   0     2G  0 lvm   /var/log/audit
    │   ├─onn-var_log                                   253:14   0     8G  0 lvm   /var/log
    │   ├─onn-var                                       253:15   0    15G  0 lvm   /var
    │   ├─onn-tmp                                       253:16   0     1G  0 lvm   /tmp
    │   ├─onn-home                                      253:17   0     1G  0 lvm   /home
    │   └─onn-var_crash                                 253:20   0    10G  0 lvm   /var/crash
    ├─onn-pool00_tdata                                  253:7    0 825.2G  0 lvm
    │ └─onn-pool00-tpool                                253:8    0 825.2G  0 lvm
    │   ├─onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1  253:9    0 798.2G  0 lvm   /
    │   ├─onn-pool00                                    253:12   0 825.2G  0 lvm
    │   ├─onn-var_log_audit                             253:13   0     2G  0 lvm   /var/log/audit
    │   ├─onn-var_log                                   253:14   0     8G  0 lvm   /var/log
    │   ├─onn-var                                       253:15   0    15G  0 lvm   /var
    │   ├─onn-tmp                                       253:16   0     1G  0 lvm   /tmp
    │   ├─onn-home                                      253:17   0     1G  0 lvm   /home
    │   └─onn-var_crash                                 253:20   0    10G  0 lvm   /var/crash
    └─onn-swap                                          253:10   0     4G  0 lvm   [SWAP]
sdb                                                       8:16   0   931G  0 disk
└─sdb1                                                    8:17   0   931G  0 part
sdc                                                       8:32   0   4.6T  0 disk
└─sdc1                                                    8:33   0   4.6T  0 part
nvme0n1                                                 259:0    0   1.1T  0 disk

So the multipath device "3614187705c01820022b002b00c52f72e" that was shown in the error is actually the root filesystem, which was created at node installation (from the ISO). Is it OK that this mpath is activated on sda? What should I do in this situation? Thank you!
On Tue, Jun 12, 2018 at 5:38 PM, femi adegoke <ovirt@fateknollogee.com> wrote:
Are your disks "multipathing"?
What's your output if you run the command multipath -ll
For comparison's sake, here is my gdeploy.conf (used for a single host gluster install) - lv1 was changed to 62gb. **Credit for that pastebin to Squeakz on the IRC channel: https://pastebin.com/LTRQ78aJ
-- Best regards, Leo David

It looks like the same error I had, where oVirt keeps "trying" to use the multipath disk as part of the install. The solution (unfortunately) is to re-install the OS, then set up a blacklist, and then proceed with the rest of the oVirt install. There is no way to release that disk from multipath, since it is part of the root filesystem.
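For anyone checking whether they are in the same situation, a short command sequence like the following confirms that the root filesystem really sits on top of the multipath map (a sketch only; the device names are the ones from this thread and will differ on other hardware):

# Which device backs / ? On an affected node this resolves to an LVM volume
# stacked on the mpath map rather than directly on sda2.
findmnt -no SOURCE /

# Full stack for the physical disk; a TYPE of "mpath" between the disk and
# the LVM volumes confirms that multipath has claimed the root disk.
lsblk -o NAME,TYPE,MOUNTPOINT /dev/sda

# The first field printed by multipath -ll (3614187705c01820022b002b00c52f72e
# in the output above) is the WWID that a blacklist entry would reference.
multipath -ll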

Thank you very much Femi. The reinstallation would not be a problem, given how much time I've already spent sorting this issue. I don't understand the "set up a blacklist" part. What does this mean, and how should I do it? I have looked over the other oVirt nodes that I have in the network, and the mpath is active on their rootfs as well. I have used the 4.2-2018052606.iso image for installation, for the cluster nodes and for the single instance too. Could there be a bug in this version? What ISO should I use just to be sure everything will work well? Again, thank you very much!
On Tue, Jun 12, 2018, 21:54 femi adegoke <ovirt@fateknollogee.com> wrote:
It looks like the same error I had, where oVirt keeps "trying" to use the multipath disk as part of the install.
The solution (unfortunately) is to re-install the OS, then set up a blacklist, and then proceed with the rest of the oVirt install.
There is no way to release that disk from multipath, since it is part of the root filesystem.

A blacklist is a list of the disks that the system should NOT mark as multipath disks. You need to create a file; you can name it local.conf, and create it in this location: /etc/multipath/conf.d/
Use the most current ISO. I think there might be a bug.
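For illustration, a minimal /etc/multipath/conf.d/local.conf along the lines Femi describes could look like this (a sketch only; the WWID is the one from the multipath -ll output earlier in the thread and must be replaced with your own):

# /etc/multipath/conf.d/local.conf
blacklist {
    wwid 3614187705c01820022b002b00c52f72e
}

# Alternatively, on a host with no SAN/multipath storage at all, everything
# can be blacklisted in one go (this is also what later node ISOs ship,
# as noted further down in the thread):
# blacklist {
#     devnode "*"
# }

After changing it, the configuration is picked up by reloading multipathd (for example, systemctl reload multipathd) or by rebooting; for the root disk the initramfs also has to be rebuilt, as described later in the thread.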

Fortunately, after node reinstallation no more mpath devices are present:

multipath -ll
Jun 13 06:32:10 | DM multipath kernel driver not loaded
Jun 13 06:32:10 | DM multipath kernel driver not loaded

But now I am encountering this error: invalid number format "virt" in option "brick-uid":

TASK [Sets options for volume] *******************************************************************************************************************************************
failed: [10.10.8.111] (item={u'key': u'storage.owner-uid', u'value': u'virt'}) => {"changed": false, "item": {"key": "storage.owner-uid", "value": "virt"}, "msg": "volume set: failed: invalid number format \"virt\" in option \"brick-uid\"\n"}
changed: [10.10.8.111] => (item={u'key': u'storage.owner-gid', u'value': u'36'})
failed: [10.10.8.111] (item={u'key': u'features.shard', u'value': u'36'}) => {"changed": false, "item": {"key": "features.shard", "value": "36"}, "msg": "volume set: failed: Error, Validation Failed\n"}
changed: [10.10.8.111] => (item={u'key': u'performance.low-prio-threads', u'value': u'30'})
changed: [10.10.8.111] => (item={u'key': u'performance.strict-o-direct', u'value': u'on'})
changed: [10.10.8.111] => (item={u'key': u'network.remote-dio', u'value': u'off'})
failed: [10.10.8.111] (item={u'key': u'network.ping-timeout', u'value': u'enable'}) => {"changed": false, "item": {"key": "network.ping-timeout", "value": "enable"}, "msg": "volume set: failed: invalid time format \"enable\" in \"option ping-timeout\"\n"}

Below is my gdeploy.conf:

[hosts]
10.10.8.111

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 10.10.8.111

[disktype]
jbod

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no

[pv]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=804GB
poolmetadatasize=4GB

[lv2]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick

[lv3]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=400GB

[lv4]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=400GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,on,32,on,off,30,off,on,off,off,off,enable
#key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
#brick_dirs=10.10.8.111:/gluster_bricks/engine/engine
brick_dirs=/gluster_bricks/engine/engine
ignore_volume_errors=no

[volume2]
action=create
volname=data
transport=tcp
key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,on,32,on,off,30,off,on,off,off,off,enable
#key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
#brick_dirs=10.10.8.111:/gluster_bricks/data/data
brick_dirs=/gluster_bricks/data/data
ignore_volume_errors=no

[volume3]
action=create
volname=vmstore
transport=tcp
key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,on,32,on,off,30,off,on,off,off,off,enable
#key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
#brick_dirs=10.10.8.111:/gluster_bricks/vmstore/vmstore
brick_dirs=/gluster_bricks/vmstore/vmstore
ignore_volume_errors=no

I just don't understand how this config should be adjusted so the ansible script will finish successfully... :(
On Wed, Jun 13, 2018 at 9:06 AM, femi adegoke <ovirt@fateknollogee.com> wrote:
A blacklist is a list of the disks that the system should NOT mark as multipath disks.
You need to create a file; you can name it local.conf, and create it in this location: /etc/multipath/conf.d/
Use the most current iso.
I think there might be a bug.
-- Best regards, Leo David
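A note on the "invalid number format" failure above (my reading, not something stated explicitly in the thread): gdeploy applies the n-th entry of value= to the n-th entry of key=, so a leftover 7-entry value list from the group/virt variant gets mapped onto the first seven keys of the 13-key list, which is exactly the pattern in the error (storage.owner-uid receives "virt", network.ping-timeout receives "enable"). A cleaned-up [volume1] section would keep a single value= line whose entries line up one-to-one with key=; a sketch:

[volume1]
action=create
volname=engine
transport=tcp
key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
value=36,36,on,32,on,off,30,off,on,off,off,off,enable
# the group/virt variant needs its own matching key= line and must not be
# mixed with the 13-key list above:
#key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
#value=virt,36,36,30,on,off,enable
brick_dirs=/gluster_bricks/engine/engine
ignore_volume_errors=no

The same applies to [volume2] and [volume3].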

Can you run the command lsblk? What's your output?
What is the "-h" for?
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
Please look at this file & make sure yours is similar: https://pastebin.com/LTRQ78aJ

That was just me being stupid... I was passing the wrong values to the keys for the volumes. I am running the playbook again.
On Wed, Jun 13, 2018 at 10:22 AM, femi adegoke <ovirt@fateknollogee.com> wrote:
Can you run the command lsblk? What's your output?
What is the "-h" for?
[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
Please look at this file & make sure yours is similar: https://pastebin.com/LTRQ78aJ
-- Best regards, Leo David

Leo, you should still add the blacklist

Thank you, the playbook just finished successfully. Now:
1. Should I create the blacklist accordingly:
kblacklist {
wwid INTEL_SSDSCKHB120G4_BTWM65160025120B
wwid eui.0025385171b04d62
wwid SAMSUNG_MZ7GE960HMHP-000AZ_S1P8NYAG123827
}
- but I am not sure what to put as the wwid.
2. How do I deploy the self-hosted engine VM, since when I hit either the "Hosted Engine" or "Hyperconverged" option it does not recognize the gluster setup that is already in place?
3. Will I be able to create some future gluster volumes using the other unused HDDs? From Cockpit, or again using gdeploy?
I am really sorry if I am asking stupid questions, but I just don't understand how to proceed further.
On Wed, Jun 13, 2018 at 10:23 AM, femi adegoke <ovirt@fateknollogee.com> wrote:
Leo, you should still add the blacklist
-- Best regards, Leo David

It's blacklist { not kblacklist (remove the "k").
Run this command to find wwids: ls -la /dev/disk/by-id/
Is the existing gluster setup installed correctly?
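For illustration, a few equivalent ways of finding the WWIDs to put in the blacklist (a sketch; the example device and WWID are the ones from earlier in this thread, and the path to scsi_id can vary by distribution):

# wwn-* and scsi-* symlinks under by-id encode the IDs multipath uses
ls -la /dev/disk/by-id/ | grep -E 'wwn-|scsi-'

# the first field of each map printed by multipath -ll is also a WWID,
# e.g. 3614187705c01820022b002b00c52f72e in the earlier output
multipath -ll

# scsi_id prints the WWID for a single device
/usr/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/sda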

Gluster volumes are started, so I assume that the gluster part is fine. Wouldn't it make more sense to add everything to the blacklist, reboot the server and continue with the ovirt-engine VM setup (adding 10.10.8.111:/engine for VM storage)? i.e.:
blacklist {
devnode "*"
}
Thank you!
On Wed, Jun 13, 2018 at 11:05 AM, femi adegoke <ovirt@fateknollogee.com> wrote:
It's blacklist { not kblacklist (remove the "k").
Run this command to find wwids: ls -la /dev/disk/by-id/
Is the existing gluster setup installed correctly?
-- Best regards, Leo David

Finally!!! Femi, thank you so much!!! I wouldn't have been able to solve this without your help. So, just as info: in this version (ISO ovirt-node-ng-installer-ovirt-4.2-2018053012), multipath.conf seems to be already configured to blacklist all devices. It already contains "# VDSM PRIVATE" as the second line, and:
blacklist {
devnode "*"
}
The HE installation went fine, but after reboot the gluster volumes were not present - the glusterd daemon was not enabled at boot, so the ansible playbook did not change that. Now, I am trying to figure out what the best procedure would be to create a new gluster volume on the sdc drive and run some VMs on it. The Cockpit UI won't let me, because it is asking for 3 nodes and says "No two hosts can be the same". Should I just format the drive as xfs, mount it (via fstab) under /gluster_bricks/spinning_storage, create the gluster volume from the CLI and add it as a new storage domain? What would be the best approach to accomplish this? Again, thank you very much!
On Wed, Jun 13, 2018 at 12:05 PM, Leo David <leoalex@gmail.com> wrote:
Gluster volumes are started, so I assume that the gluster part is fine. Wouldn't it make more sense to add everything to the blacklist, reboot the server and continue with the ovirt-engine VM setup (adding 10.10.8.111:/engine for VM storage)? i.e.:
blacklist { devnode "*" }
Thank you !
On Wed, Jun 13, 2018 at 11:05 AM, femi adegoke <ovirt@fateknollogee.com> wrote:
It's blacklist { not kblacklist (remove the "k").
Run this command to find wwids: ls -la /dev/disk/by-id/
Is the existing gluster setup installed correctly?
-- Best regards, Leo David
-- Best regards, Leo David
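Not an answer from the thread, but a rough sketch of the manual route Leo is describing for the spare sdc disk (the volume name, mount point and option values are assumptions; on a single node there is no replica, hence the force flag):

# format and mount the spare disk (this destroys any data on sdc1)
mkfs.xfs -f -i size=512 /dev/sdc1
mkdir -p /gluster_bricks/spinning_storage
echo '/dev/sdc1 /gluster_bricks/spinning_storage xfs defaults 0 0' >> /etc/fstab
mount /gluster_bricks/spinning_storage

# create and start a single-brick volume
mkdir -p /gluster_bricks/spinning_storage/data2
gluster volume create data2 10.10.8.111:/gluster_bricks/spinning_storage/data2 force
gluster volume set data2 group virt            # applies the virt option group shipped with gluster, if present
gluster volume set data2 storage.owner-uid 36  # vdsm
gluster volume set data2 storage.owner-gid 36  # kvm
gluster volume start data2

# make sure glusterd itself comes back after a reboot (the issue mentioned above)
systemctl enable --now glusterd

The volume can then be attached as a new GlusterFS storage domain from the engine UI, which is what Femi describes next.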

Since this is a single node gluster install, you have to add the gluster volume.
Log into the HE web GUI, go to Storage, Storage Domains, and click on "New Domain" (it's located at the top right side).
Storage Type: select GlusterFS
Export Path: enter it in this format: "myserver.mydomain.com:/my/local/path"
Click OK.
Wait for it to do its thing.

Yes, it's working as expected. Thank you very much Femi, really very helpful!
On Wed, Jun 13, 2018, 16:16 femi adegoke <ovirt@fateknollogee.com> wrote:
Since this is a single node gluster install, you have to add the gluster volume.
Log into the HE web GUI, go to Storage, Storage Domains, and click on "New Domain" (it's located at the top right side).
Storage Type: select GlusterFS
Export Path: enter it in this format: "myserver.mydomain.com:/my/local/path"
Click OK.
Wait for it to do its thing.

No problem Leo. Glad it worked out ok!!

On Wed, Jun 13, 2018 at 1:18 PM Leo David <leoalex@gmail.com> wrote:
Finally!!! Femi, thank you so much!!! I wouldn't have been able to solve this without your help. So, just as info: in this version (ISO ovirt-node-ng-installer-ovirt-4.2-2018053012), multipath.conf seems to be already configured to blacklist all devices. It already contains "# VDSM PRIVATE" as the second line, and:
blacklist {
devnode "*"
}
The HE installation went fine, but after reboot the gluster volumes were not present - the glusterd daemon was not enabled at boot, so the ansible playbook did not change that. Now, I am trying to figure out what the best procedure would be to create a new gluster volume on the sdc drive and run some VMs on it. The Cockpit UI won't let me, because it is asking for 3 nodes and says "No two hosts can be the same". Should I just format the drive as xfs, mount it (via fstab) under /gluster_bricks/spinning_storage, create the gluster volume from the CLI and add it as a new storage domain? What would be the best approach to accomplish this? Again, thank you very much!
Hi, can you confirm which steps you followed, after setting up the gluster volumes with gdeploy, to reach the "HE installation went fine" target? Which option did you use from Cockpit? Thanks!

0) Install oVirt Node v4.2.3.1 (or higher)
1) Create a blacklist here: /etc/multipath/conf.d/local.conf (assuming local.conf is the new file you create)
2) local.conf should be similar to this (using your disks' wwids):
blacklist {
wwid INTEL_SSDSCKHB120G4_BTWM65160025120B
wwid eui.0025385171b04d62
wwid SAMSUNG_MZ7GE960HMHP-000AZ_S1P8NYAG123827
}
3) Remove the multipath device using:
multipath -f INTEL_SSDSCKHB120G4_BTWM65160025120B
then run this command:
dracut --force --add multipath --include /etc/multipath /etc/multipath
4) Reboot
5) Proceed with the rest of the oVirt install
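Pulled together as one sequence (a sketch only; the WWIDs are the example ones from the message above and have to be replaced with your own):

# 1-2) create the blacklist file
cat > /etc/multipath/conf.d/local.conf <<'EOF'
blacklist {
    wwid INTEL_SSDSCKHB120G4_BTWM65160025120B
    wwid eui.0025385171b04d62
    wwid SAMSUNG_MZ7GE960HMHP-000AZ_S1P8NYAG123827
}
EOF

# 3) flush the now-blacklisted map(s), then rebuild the initramfs so the
#    blacklist is honoured at early boot
multipath -f INTEL_SSDSCKHB120G4_BTWM65160025120B
dracut --force --add multipath --include /etc/multipath /etc/multipath

# 4) reboot, then continue with the rest of the oVirt install
reboot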
participants (3)
-
femi adegoke
-
Gianluca Cecchi
-
Leo David