Hello Mike,
I think the requirement for a gluster node is that it has at least two disks: one for the O/S and the others for bricks.
It doesn't seem to say this in the oVirt prerequisites.
Could you reconfigure your RAID device to provide LUNs?
Regards,
Paul S.
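To illustrate Paul's suggestion: after carving a second virtual disk (LUN) out of the RAID controller, it should show up as its own block device. A quick check might look like this (the device names in the comments are examples from this thread, not guaranteed output):

```shell
# List whole disks only (-d) so partitions don't clutter the output.
# Both the OS disk and the new, bare gluster disk should appear.
lsblk -d -o NAME,TYPE,SIZE,FSTYPE
# e.g.:
#   sda  disk  500G  ...   <- OS disk (existing RAID logical volume)
#   sdb  disk  1.1T        <- new bare LUN for gluster bricks (no FSTYPE)
```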
OK. The oVirt Node installation sets up partitions to use the entire disk (the documentation recommends using the default layout). The Cockpit HE deployment tries to use gluster, but on a different, non-existent disk. Do you recommend that I not use the default partitioning when installing oVirt Node, and instead set aside an unformatted partition for gluster?
Thanks,
Mike
Mike,
Keep it simple. It has to be a bare, plain disk. Do not format it or put a file system on it, or the gluster install will fail.
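Paul's point can be turned into a pre-flight check before running the wizard. This is only a sketch; the function name is made up for illustration, and /dev/sdb is the device from this thread:

```shell
# check_bare_disk: succeed only if the given device exists and carries
# no filesystem/partition/LVM signatures that would break the gluster setup.
check_bare_disk() {
    dev="$1"
    if [ ! -b "$dev" ]; then
        echo "ERROR: $dev is not a block device" >&2
        return 1
    fi
    # wipefs -n is read-only: it reports signatures without erasing anything.
    if [ -n "$(wipefs -n "$dev" 2>/dev/null)" ]; then
        echo "ERROR: $dev already has signatures; wipe it first" >&2
        return 1
    fi
    echo "$dev looks bare"
}

check_bare_disk /dev/sdb || true   # example call; device name from this thread
```

If the device carries leftover signatures, `wipefs -a` (destructive!) would clear them, but only run that against a disk you are certain holds no data.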
On Jun 25 2019, at 3:53 pm, Mike Davis <ovirtlists@profician.com> wrote:
I installed oVirt Node for oVirt v4.3.4 on a single server (default partitioning). I know a single node is not best practice, but fits our current need.
Now, I am trying to deploy the hosted engine, but getting the error "Device /dev/sdb not found" (full output below). /dev/sdb does not exist. Changing to /dev/sda fails (as I would expect) with "Creating physical volume '/dev/sda' failed". There is only one logical disk on this server (RAID), which is /dev/sda.
The process does not seem correct to me. Why would the wizard try to build gluster volumes on bare partitions rather than in a file system like root? What do we need to do to deploy the HE?
I used Cockpit and "Hyperconverged : Configure Gluster storage and oVirt hosted engine". Since it is a single node, I selected "Run Gluster Wizard For Single Node".
Host1 set to the IP address of the eno1 Ethernet interface (10.0.0.11)
No Packages, Update Hosts
Volumes left at defaults (engine, data, and vmstore)
Bricks left at defaults:
- Raid 6 / 256 / 12
- Host 10.0.0.11
- device name /dev/sdb (all LVs)
- engine 100GB, data 500GB, vmstore 500GB
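For reference, the brick settings above boil down to something like the following variables for gluster.infra/roles/backend_setup. This is a hedged reconstruction from the failing loop item in the deployment log; the exact variable file the wizard writes may differ:

```yaml
# Hedged reconstruction: the vgname/pvname pair matches the item shown
# in the failing "Create volume groups" task below.
gluster_infra_volume_groups:
  - vgname: gluster_vg_sdb
    pvname: /dev/sdb   # must be a real, bare block device on the host
```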
Output after "Deployment failed":
PLAY [Setup backend] ***********************************************************

TASK [Gathering Facts] *********************************************************
ok: [10.0.0.11]

TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/firewall_config : check if required variables are set] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
ok: [10.0.0.11] => (item=2049/tcp)
ok: [10.0.0.11] => (item=54321/tcp)
ok: [10.0.0.11] => (item=5900/tcp)
ok: [10.0.0.11] => (item=5900-6923/tcp)
ok: [10.0.0.11] => (item=5666/tcp)
ok: [10.0.0.11] => (item=16514/tcp)

TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
ok: [10.0.0.11] => (item=glusterfs)

TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Install python-yaml package for Debian systems] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array] ***********
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)] *********
skipping: [10.0.0.11] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})

TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ********
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Check if valid disktype is provided] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
failed: [10.0.0.11] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
10.0.0.11 : ok=9 changed=0 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0
---
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-leave@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/