
Hello everyone,

I've gotten stuck in this situation and just wanted to know if it can actually be done, or whether I should follow a different approach. I need to do this single-instance deployment for a POC, with the eventual goal of scaling the setup up to 3 nodes in the future. I followed this tutorial:
https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Single_node_...

But when I run gdeploy, I end up with this error:

TASK [Run a shell script] ************************************************************
failed: [10.10.8.101] (item=/usr/share/gdeploy/scripts/blacklist_all_disks.sh) => {
    "changed": true,
    "failed_when_result": true,
    "item": "/usr/share/gdeploy/scripts/blacklist_all_disks.sh",
    "msg": "non-zero return code",
    "rc": 1,
    "stderr": "Shared connection to 10.10.8.101 closed.",
    "stdout_lines": [
        "iscsiadm: No active sessions.",
        "This script will prevent listing iscsi devices when multipath CLI is called",
        "without parameters, and so no LUNs will be discovered by applications like VDSM",
        "(oVirt, RHV) which shell-out to call `/usr/sbin/multipath` after target login",
        "Jun 12 14:30:48 | 3614187705c01820022b002b00c52f72e2: map in use",
        "Jun 12 14:30:48 | failed to remove multipath map 3614187705c01820022b002b00c52f72e"
    ]
}
        to retry, use: --limit @/tmp/tmpbYZBC6/run-script.retry

PLAY RECAP ***************************************************************************
10.10.8.101                : ok=0    changed=0    unreachable=0    failed=1

This is my gdeploy.conf:

[hosts]
10.10.8.101

[script1]
action=execute
ignore_script_errors=no
file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 10.10.8.111

[disktype]
jbod

[diskcount]
12

[stripesize]
256

[service1]
action=enable
service=chronyd

[service2]
action=restart
service=chronyd

[shell2]
action=execute
command=vdsm-tool configure --force

[script3]
action=execute
file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
ignore_script_errors=no

[pv]
action=create
devices=sdb
ignore_pv_errors=no

[vg1]
action=create
vgname=gluster_vg_sdb
pvname=sdb
ignore_vg_errors=no

[lv1]
action=create
poolname=gluster_thinpool_sdb
ignore_lv_errors=no
vgname=gluster_vg_sdb
lvtype=thinpool
size=804GB
poolmetadatasize=4GB

[lv2]
action=create
lvname=gluster_lv_engine
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/engine
size=100GB
lvtype=thick

[lv3]
action=create
lvname=gluster_lv_data
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/data
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=400GB

[lv4]
action=create
lvname=gluster_lv_vmstore
ignore_lv_errors=no
vgname=gluster_vg_sdb
mount=/gluster_bricks/vmstore
lvtype=thinlv
poolname=gluster_thinpool_sdb
virtualsize=400GB

[selinux]
yes

[service3]
action=restart
service=glusterd
slice_setup=yes

[firewalld]
action=add
ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
services=glusterfs

[script2]
action=execute
file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh

[shell3]
action=execute
command=usermod -a -G gluster qemu

[volume1]
action=create
volname=engine
transport=tcp
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
#brick_dirs=10.10.8.111:/gluster_bricks/engine/engine
brick_dirs=/gluster_bricks/engine/engine
ignore_volume_errors=no

[volume2]
action=create
volname=data
transport=tcp
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
#brick_dirs=10.10.8.111:/gluster_bricks/data/data
brick_dirs=/gluster_bricks/data/data
ignore_volume_errors=no

[volume3]
action=create
volname=vmstore
transport=tcp
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout,performance.strict-o-direct,network.remote-dio,cluster.granular-entry-heal
value=virt,36,36,30,on,off,enable
#brick_dirs=10.10.8.111:/gluster_bricks/vmstore/vmstore
brick_dirs=/gluster_bricks/vmstore/vmstore
ignore_volume_errors=no

Any thoughts on this? I'm scratching my head trying to get it sorted out...
Thank you very much!
Have a nice day!

--
Best regards,
Leo David

Are your disks "multipathing"? What's your output if you run the command multipath -ll?

For comparison's sake, here is my gdeploy.conf (used for a single-host gluster install) - lv1 was changed to 62GB.
**Credit for that pastebin goes to Squeakz on the IRC channel: https://pastebin.com/LTRQ78aJ
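If it helps while you gather that, here is a minimal sketch of the commands I'd look at (the lsblk column list is just one convenient way to see the WWIDs next to the mount points, not something the tutorial prescribes):

    # Show active multipath maps and the paths behind them.
    multipath -ll

    # Show the block device tree with WWIDs, so it's clear which disk
    # each multipath map (and the root filesystem) sits on.
    lsblk -o NAME,WWN,SIZE,TYPE,MOUNTPOINT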

Thank you very much for your response, now it feels like I can begin to see the light! So:

multipath -ll
3614187705c01820022b002b00c52f72e dm-1 DELL    ,PERC H730P Mini
size=931G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:2:0:0 sda 8:0 active ready running

lsblk
NAME                                                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                                     8:0    0   931G  0 disk
├─sda1                                                  8:1    0     1G  0 part
├─sda2                                                  8:2    0   930G  0 part
└─3614187705c01820022b002b00c52f72e                   253:1    0   931G  0 mpath
  ├─3614187705c01820022b002b00c52f72e1                253:3    0     1G  0 part  /boot
  └─3614187705c01820022b002b00c52f72e2                253:4    0   930G  0 part
    ├─onn-pool00_tmeta                                253:6    0     1G  0 lvm
    │ └─onn-pool00-tpool                              253:8    0 825.2G  0 lvm
    │   ├─onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:9   0 798.2G  0 lvm   /
    │   ├─onn-pool00                                  253:12   0 825.2G  0 lvm
    │   ├─onn-var_log_audit                           253:13   0     2G  0 lvm   /var/log/audit
    │   ├─onn-var_log                                 253:14   0     8G  0 lvm   /var/log
    │   ├─onn-var                                     253:15   0    15G  0 lvm   /var
    │   ├─onn-tmp                                     253:16   0     1G  0 lvm   /tmp
    │   ├─onn-home                                    253:17   0     1G  0 lvm   /home
    │   └─onn-var_crash                               253:20   0    10G  0 lvm   /var/crash
    ├─onn-pool00_tdata                                253:7    0 825.2G  0 lvm
    │ └─onn-pool00-tpool                              253:8    0 825.2G  0 lvm
    │   ├─onn-ovirt--node--ng--4.2.3.1--0.20180530.0+1 253:9   0 798.2G  0 lvm   /
    │   ├─onn-pool00                                  253:12   0 825.2G  0 lvm
    │   ├─onn-var_log_audit                           253:13   0     2G  0 lvm   /var/log/audit
    │   ├─onn-var_log                                 253:14   0     8G  0 lvm   /var/log
    │   ├─onn-var                                     253:15   0    15G  0 lvm   /var
    │   ├─onn-tmp                                     253:16   0     1G  0 lvm   /tmp
    │   ├─onn-home                                    253:17   0     1G  0 lvm   /home
    │   └─onn-var_crash                               253:20   0    10G  0 lvm   /var/crash
    └─onn-swap                                        253:10   0     4G  0 lvm   [SWAP]
sdb                                                     8:16   0   931G  0 disk
└─sdb1                                                  8:17   0   931G  0 part
sdc                                                     8:32   0   4.6T  0 disk
└─sdc1                                                  8:33   0   4.6T  0 part
nvme0n1                                               259:0    0   1.1T  0 disk

So the multipath map "3614187705c01820022b002b00c52f72e" that was shown in the error is actually the one holding the root filesystem, which was created at node installation (from the ISO). Is it OK that this mpath is activated on sda? What should I do in this situation?
Thank you very much for your help!

On Tue, Jun 12, 2018 at 5:52 PM, femi adegoke <ovirt@fateknollogee.com> wrote:
> Yes Leo, single host install can be done!
--
Best regards,
Leo David

0) Install oVirt Node v4.2.3.1 (or higher)
1) Create a blacklist here: /etc/multipath/conf.d/local.conf (assuming local.conf is the new file you create)
2) local.conf should be similar to this (using your disks' wwids):

blacklist {
  wwid INTEL_SSDSCKHB120G4_BTWM65160025120B
  wwid eui.0025385171b04d62
  wwid SAMSUNG_MZ7GE960HMHP-000AZ_S1P8NYAG123827
}

3) Remove the multipath device using:

multipath -f INTEL_SSDSCKHB120G4_BTWM65160025120B

then run this command:

dracut --force --add multipath --include /etc/multipath /etc/multipath

4) Reboot
5) Proceed with the rest of the oVirt install
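As a concrete illustration, here is a minimal sketch of the same steps adapted to Leo's host, assuming the wwid 3614187705c01820022b002b00c52f72e from the multipath -ll output earlier in this thread is the map to blacklist (since that map currently carries the root filesystem, the multipath -f step will likely fail with "map in use"; the blacklist, the initramfs rebuild, and the reboot are what actually make it go away):

    # /etc/multipath/conf.d/local.conf  (hypothetical content for Leo's host)
    blacklist {
        wwid 3614187705c01820022b002b00c52f72e
    }

    # Try to flush the map now; expect "map in use" while / is still on it.
    multipath -f 3614187705c01820022b002b00c52f72e

    # Rebuild the initramfs so the blacklist is applied at early boot.
    dracut --force --add multipath --include /etc/multipath /etc/multipath

    # Reboot, then proceed with the rest of the install.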