
I installed oVirt Node for oVirt v4.3.4 on a single server (default partitioning). I know a single node is not best practice, but it fits our current need.

Now I am trying to deploy the hosted engine, but I am getting the error "Device /dev/sdb not found" (full output below). /dev/sdb does not exist. Changing to /dev/sda fails (as I would expect) with "Creating physical volume '/dev/sda' failed". There is only one logical disk on this server (RAID), which is /dev/sda.

The process does not seem correct to me. Why would the wizard try to build gluster volumes on bare partitions rather than in a file system like root? What do we need to do to deploy the HE?

I used Cockpit and "Hyperconverged : Configure Gluster storage and oVirt hosted engine". Since it is a single node, I selected "Run Gluster Wizard For Single Node".

- Host1 set to the IP address of the eno1 Ethernet interface (10.0.0.11)
- No Packages, Update Hosts
- Volumes left at defaults (engine, data, and vmstore)
- Bricks left at defaults - (Raid 6 / 256 / 12) - Host 10.0.0.11 - device name /dev/sdb (all LVs) - engine 100GB, data 500GB, vmstore 500GB

Output after "Deployment failed":

PLAY [Setup backend] ***********************************************************
TASK [Gathering Facts] ********************************************************* ok: [10.0.0.11]
TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] *** ok: [10.0.0.11]
TASK [gluster.infra/roles/firewall_config : check if required variables are set] *** skipping: [10.0.0.11]
TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ******** ok: [10.0.0.11] => (item=2049/tcp) ok: [10.0.0.11] => (item=54321/tcp) ok: [10.0.0.11] => (item=5900/tcp) ok: [10.0.0.11] => (item=5900-6923/tcp) ok: [10.0.0.11] => (item=5666/tcp) ok: [10.0.0.11] => (item=16514/tcp)
TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] *** ok: [10.0.0.11] => (item=glusterfs)
TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] *** ok: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] *** skipping: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] *** ok: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Install python-yaml package for Debian systems] *** skipping: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array] *********** ok: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)] ********* skipping: [10.0.0.11] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})
TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ******** skipping: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ****** skipping: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Check if valid disktype is provided] *** skipping: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ****** skipping: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ****** ok: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] *** ok: [10.0.0.11]
TASK [gluster.infra/roles/backend_setup : Create volume groups] **************** failed: [10.0.0.11] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}
NO MORE HOSTS LEFT *************************************************************
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP ********************************************************************* 10.0.0.11 : ok=9 changed=0 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0
---
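For reference, the failing "Create volume groups" task loops over the brick-device variables that the Cockpit wizard generates. Reconstructed from the loop item shown in the output, the relevant fragment looks roughly like this (a sketch; the `gluster_infra_volume_groups` variable name comes from the gluster.infra role):

```yaml
# Sketch of the wizard-generated backend_setup variables.
# The wizard default points at /dev/sdb, which does not exist on this host,
# hence "Device /dev/sdb not found" when the role tries to create the VG.
gluster_infra_volume_groups:
  - vgname: gluster_vg_sdb
    pvname: /dev/sdb
```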

Mike,

Keep it simple. It has to be a bare, plain disk. Do not format or put a file system on the disk, or the gluster install will fail.

On Jun 25 2019, at 3:53 pm, Mike Davis <ovirtlists@profician.com> wrote:

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/SWYYE4YEDRIMK2...
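Before re-running the deployment, a quick sanity check is to confirm that the brick device the wizard points at actually exists as a block device (a sketch; /dev/sdb is just the wizard default, so adjust the name for your host):

```shell
#!/bin/sh
# Report whether a candidate brick device exists as a block device.
check_dev() {
  if [ -b "$1" ]; then
    echo "$1: present"
  else
    # Same condition the "Create volume groups" task reports.
    echo "$1: not found"
  fi
}

check_dev /dev/sdb
```

On the host above only /dev/sda exists, so this prints `/dev/sdb: not found` — the same condition behind the VG-create failure.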

Ok. The oVirt Node installation sets up partitions to use the entire disk. (Documentation recommends using the default layout.) The Cockpit HE deployment then tries to put gluster on a different disk, which does not exist.

Do you recommend that I not use the default partitioning when installing oVirt Node, and instead set aside a partition for gluster (not formatting it, etc.)?

Thanks,
Mike

On 6/25/2019 7:26 PM, femi adegoke wrote:
Mike,
Keep it simple. It has to be a bare, plain disk. Do not format or put a file system on the disk or the gluster install will fail.
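If a second disk or LUN really is not available, the direction Mike is asking about — leaving an unformatted partition free at install time and pointing the wizard's brick device at it — would make the generated variables look roughly like this (a sketch only; `/dev/sda4` is a hypothetical partition name, and using a whole bare disk, as advised above, remains the simpler path):

```yaml
# Hypothetical sketch: brick VG built on a spare, unformatted partition
# instead of a dedicated disk. Partition name is an assumption.
gluster_infra_volume_groups:
  - vgname: gluster_vg_sda4
    pvname: /dev/sda4
```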

Hello Mike,

I think the requirement for a gluster node is that it has at least 2 disks: one for the O/S and the others for bricks. It doesn't seem to say this in the oVirt prerequisites. Could you reconfigure your RAID device to provide LUNs?

Regards,
Paul S.

________________________________
From: Mike Davis <ovirtlists@profician.com>
Sent: 26 June 2019 02:11
To: users@ovirt.org
Subject: [ovirt-users] Re: HE on single oVirt node
participants (3)
- femi adegoke
- Mike Davis
- Staniforth, Paul