Ok. The oVirt Node installation sets up partitions to use the entire
disk (the documentation recommends the default layout). The Cockpit
HE deployment then tries to put gluster on a different disk, which
does not exist. Do you recommend that I skip the default partitioning
when installing oVirt Node and instead set aside a partition for
gluster (leaving it unformatted)?
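If so, I imagine something like this after shrinking the default
layout (a sketch only; the size and start offset are hypothetical,
and mkpart is destructive if pointed at space that is in use):

  # carve the unused tail of the disk into a bare partition for gluster
  parted /dev/sda mkpart primary 200GB 100%
  # confirm the new partition exists and carries no file system
  lsblk -f /dev/sda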
Thanks,
Mike
On 6/25/2019 7:26 PM, femi adegoke wrote:
Mike,
Keep it simple. It has to be a bare, plain disk.
Do not format the disk or put a file system on it, or the gluster
install will fail.
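If the disk was used before, clear any leftover signatures first so
it looks bare (assuming /dev/sdb is the disk you are giving to
gluster; this is destructive):

  # remove all file system, LVM, and RAID signatures from the disk
  wipefs -a /dev/sdb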
On Jun 25 2019, at 3:53 pm, Mike Davis <ovirtlists(a)profician.com> wrote:
I installed oVirt Node for oVirt v4.3.4 on a single server (default
partitioning). I know a single node is not best practice, but it fits
our current need.
Now, I am trying to deploy the hosted engine, but I am getting the
error "Device /dev/sdb not found" (full output below). /dev/sdb does
not exist. Changing the device to /dev/sda fails (as I would expect)
with "Creating physical volume '/dev/sda' failed". There is only one
logical disk on this server (RAID), which is /dev/sda.
The process does not seem correct to me. Why would the wizard try to
build gluster volumes on bare partitions rather than in a file system
such as root? What do we need to do to deploy the HE?
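One way to confirm which block devices and LVM physical volumes
actually exist (standard util-linux/LVM commands):

  # list disks/partitions and any LVM physical volumes
  lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
  pvs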
I used Cockpit and "Hyperconverged : Configure Gluster storage and
oVirt hosted engine". Since it is a single node, I selected "Run
Gluster Wizard For Single Node".
- Host1 set to the IP address of the eno1 Ethernet interface (10.0.0.11)
- No Packages, Update Hosts
- Volumes left at defaults (engine, data, and vmstore)
- Bricks left at defaults (see the variables sketch below):
  - (Raid 6 / 256 / 12)
  - Host 10.0.0.11
  - device name /dev/sdb (all LVs)
  - engine 100GB, data 500GB, vmstore 500GB
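My reading of the playbook output below is that those brick settings
are translated into Ansible variables along these lines (the variable
name comes from the gluster.infra backend_setup role; the exact file
the wizard generates may differ):

  gluster_infra_volume_groups:
    - vgname: gluster_vg_sdb
      pvname: /dev/sdb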
Output after "Deployment failed":
PLAY [Setup backend] ***********************************************************

TASK [Gathering Facts] *********************************************************
ok: [10.0.0.11]

TASK [gluster.infra/roles/firewall_config : Start firewalld if not already started] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/firewall_config : check if required variables are set] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/firewall_config : Open/Close firewalld ports] ********
ok: [10.0.0.11] => (item=2049/tcp)
ok: [10.0.0.11] => (item=54321/tcp)
ok: [10.0.0.11] => (item=5900/tcp)
ok: [10.0.0.11] => (item=5900-6923/tcp)
ok: [10.0.0.11] => (item=5666/tcp)
ok: [10.0.0.11] => (item=16514/tcp)

TASK [gluster.infra/roles/firewall_config : Add/Delete services to firewalld rules] ***
ok: [10.0.0.11] => (item=glusterfs)

TASK [gluster.infra/roles/backend_setup : Gather facts to determine the OS distribution] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for debian systems.] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL systems.] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Install python-yaml package for Debian systems] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Initialize vdo_devs array] ***********
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Record VDO devices (if any)] *********
skipping: [10.0.0.11] => (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'})

TASK [gluster.infra/roles/backend_setup : Enable and start vdo service] ********
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Create VDO with specified size] ******
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Check if valid disktype is provided] ***
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
skipping: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] ***
ok: [10.0.0.11]

TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
failed: [10.0.0.11] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}

NO MORE HOSTS LEFT *************************************************************

NO MORE HOSTS LEFT *************************************************************

PLAY RECAP *********************************************************************
10.0.0.11 : ok=9 changed=0 unreachable=0 failed=1 skipped=8 rescued=0 ignored=0
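For what it's worth, the failing task appears to be roughly
equivalent to running this by hand (my sketch; the role also passes
RAID-aware alignment options that I have left out):

  # create an LVM physical volume on the missing disk, then the VG
  pvcreate /dev/sdb
  vgcreate gluster_vg_sdb /dev/sdb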