On Sat, Jan 19, 2019 at 1:07 PM <hunter86_bg@yahoo.com> wrote:
Hello Community,
recently I somehow managed to deploy a 2-node cluster on GlusterFS, but
after a serious engine failure I have decided to start from scratch.
A 2-node hyperconverged Gluster setup is definitely a bad idea, since it is
not going to protect you from split-brains: with only two nodes there is no
third vote to form a quorum majority when the nodes lose contact with each
other.
Please choose 1 or 3 nodes, but not 2.
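As a quick sanity check you can look at the quorum settings on the volume;
a minimal sketch, assuming your engine volume is named 'engine' as in the
config you pasted further down:

  # show the client-side and server-side quorum settings for the volume
  gluster volume get engine cluster.quorum-type
  gluster volume get engine cluster.server-quorum-type

On a replica-3 (or replica-2 + arbiter) volume these are normally set to
'auto' and 'server'; on a plain 2-node replica there is simply no majority
to enforce.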
What I have done so far:
1. Install CentOS 7 from scratch
2. Add the oVirt repositories, vdo, and cockpit for oVirt (see the commands
below)
3. Deploy the Gluster cluster using Cockpit
4. Try to deploy the hosted engine, which has failed several times.
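For reference, step 2 amounted to roughly the following; a sketch from
memory, with the release RPM version assumed from the ovirt-hosted-engine-ha
build quoted further down:

  # enable the oVirt 4.2 repositories (version assumed, not confirmed)
  yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
  # cockpit dashboard for the hyperconverged wizard, plus VDO support
  yum install cockpit-ovirt-dashboard vdo kmod-kvdo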
Without any logs it's difficult to guess what really happened, but I think
it could be related to the two-node approach, which is explicitly prevented.
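To gather those logs, something along these lines should do; a minimal
sketch using the standard log locations for hosted-engine deployments:

  # deployment logs written by hosted-engine --deploy
  ls -lt /var/log/ovirt-hosted-engine-setup/
  # runtime logs of the HA agent and broker
  journalctl -u ovirt-ha-agent -u ovirt-ha-broker --since today
  tail -n 100 /var/log/ovirt-hosted-engine-ha/agent.log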
Up to now I have found that ovirt-ha-agent is logging:
Jan 19 13:54:57 ovirt1.localdomain ovirt-ha-agent[16992]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent
    return action(he)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper
    return he.start_monitoring()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 413, in start_monitoring
    self._initialize_broker()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 535, in _initialize_broker
    m.get('options', {}))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 83, in start_monitor
    .format(type, options, e))
RequestError: Failed to start monitor ping, options {'addr': '192.168.1.1'}: [Errno 2] No such file or directory
This simply means that ovirt-ha-agent fails to communicate (in order to send
a ping to check network connectivity) with ovirt-ha-broker over a unix
domain socket.
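You can verify this directly; a minimal sketch, assuming the broker socket
lives under /var/run/ovirt-hosted-engine-ha/ (the default location on 4.2,
but check your install):

  # is the broker's unix domain socket actually present and listening?
  ls -l /var/run/ovirt-hosted-engine-ha/broker.socket
  ss -xlp | grep -i broker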
'[Errno 2] No such file or directory' means that the socket is closed on the
ovirt-ha-broker side: you can probably see why by checking
/var/log/ovirt-hosted-engine-ha/broker.log, but if the setup never completed
successfully this does not surprise me, and I strongly suggest correctly
completing the deployment before trying anything else.
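In practice that usually means checking why the broker went away, cleaning
up the failed attempt, and re-running the deployment; a sketch, assuming the
ovirt-hosted-engine-setup package is installed (ovirt-hosted-engine-cleanup
ships with it on 4.2):

  # look at the broker side first
  systemctl status ovirt-ha-broker
  tail -n 100 /var/log/ovirt-hosted-engine-ha/broker.log
  # then wipe the failed attempt and redeploy
  ovirt-hosted-engine-cleanup
  hosted-engine --deploy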
According to https://access.redhat.com/solutions/3353391 this error shows up
when /etc/ovirt-hosted-engine/hosted-engine.conf is empty, but mine looks OK:
[root@ovirt1 tmp]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=engine.localdomain
vm_disk_id=bb0a9839-a05d-4d0a-998c-74da539a9574
vm_disk_vol_id=c1fc3c59-bc6e-4b74-a624-557a1a62a34f
vmid=d0e695da-ec1a-4d6f-b094-44a8cac5f5cd
storage=ovirt1.localdomain:/engine
nfs_version=
mnt_options=backup-volfile-servers=ovirt2.localdomain:ovirt3.localdomain
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=vnc
domainType=glusterfs
spUUID=00000000-0000-0000-0000-000000000000
sdUUID=444e524e-9008-48f8-b842-1ce7b95bf248
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=192.168.1.1
bridge=ovirtmgmt
metadata_volume_UUID=a3be2390-017f-485b-8f42-716fb6094692
metadata_image_UUID=368fb8dc-6049-4ef0-8cf8-9d3c4d772d59
lockspace_volume_UUID=41762f85-5d00-488f-bcd0-3de49ec39e8b
lockspace_image_UUID=de100b9b-07ac-4986-9d86-603475572510
conf_volume_UUID=4306f6d6-7fe9-499d-81a5-6b354e8ecb79
conf_image_UUID=d090dd3f-fc62-442a-9710-29eeb56b0019
# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=
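Given the storage= line above, it is also worth confirming that the backing
Gluster volume is healthy and actually mounted on the host; a quick sketch
using the volume name from the config:

  # all bricks of the engine volume should be online
  gluster volume status engine
  # the hosted-engine storage domain should appear among the mounts
  grep ':/engine' /proc/mounts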
The ovirt-ha-agent version is:
ovirt-hosted-engine-ha-2.2.18-1.el7.noarch
Can you guide me on how to resolve this issue and deploy the self-hosted
engine? Where should I start?