Hi Again,
it seems that sanlock error -223 indicates a sanlock lockspace error. I have somehow
reinitialized the lockspace and the engine is up and running, but I now have 2 VMs defined:
1. The engine itself
2. A VM called "External-HostedEngineLocal"
I'm pretty sure that there are some tasks the wizard completes after a successful
power-on of the engine, which should clean up this situation, and in my case they are
not actually working.
Could someone advise how to get rid of that VM, and what I should do in order to complete
the deployment?
Thanks in advance to all who read this thread.
Best Regards,
Strahil Nikolov
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
To: Simone Tiraboschi <stirabos(a)redhat.com>
Cc: users <users(a)ovirt.org>
Sent: Saturday, 19 January 2019, 23:34
Subject: Re: [ovirt-users] HyperConverged Self-Hosted deployment fails
Hello All,
it seems that the ovirt-ha-broker has some problems:
Thread-8::DEBUG::2019-01-19 19:30:16,048::stompreactor::479::jsonrpc.AsyncoreClient::(send) Sending response
...skipping...
smtp-server = localhost
smtp-port = 25
source-email = root@localhost
destination-emails = root@localhost
[notify]
state_transition = maintenance|start|stop|migrate|up|down
Listener::DEBUG::2019-01-19 19:30:31,741::heconflib::95::ovirt_hosted_engine_ha.broker.notifications.Notifications.config.broker::(_dd_pipe_tar) stderr:
Thread-3::DEBUG::2019-01-19 19:30:31,747::stompreactor::479::jsonrpc.AsyncoreClient::(send) Sending response
StatusStorageThread::ERROR::2019-01-19 19:30:31,751::status_broker::90::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(run) Failed to update state.
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 82, in run
    if (self._status_broker._inquire_whiteboard_lock() or
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 190, in _inquire_whiteboard_lock
    self.host_id, self._lease_file)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 128, in host_id
    raise ex.HostIdNotLockedError("Host id is not set")
HostIdNotLockedError: Host id is not set
StatusStorageThread::ERROR::2019-01-19 19:30:31,751::status_broker::70::ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update::(trigger_restart) Trying to restart the broker
And most probably the issue is within sanlock:
2019-01-19 19:29:57 4739 [4602]: worker0 aio collect WR 0x7f92a00008c0:0x7f92a00008d0:0x7f92acc70000 result 1048576:0 other free
2019-01-19 19:30:01 4744 [4603]: s8 lockspace hosted-engine:1:/var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde:0
2019-01-19 19:30:01 4744 [2779]: verify_leader 1 wrong magic 0 /var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde
2019-01-19 19:30:01 4744 [2779]: leader1 delta_acquire_begin error -223 lockspace hosted-engine host_id 1
2019-01-19 19:30:01 4744 [2779]: leader2 path /var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde offset 0
2019-01-19 19:30:01 4744 [2779]: leader3 m 0 v 30003 ss 512 nh 0 mh 1 oi 0 og 0 lv 0
2019-01-19 19:30:01 4744 [2779]: leader4 sn hosted-engine rn ts 0 cs 60346c59
2019-01-19 19:30:02 4745 [4603]: s8 add_lockspace fail result -223
2019-01-19 19:30:07 4750 [4603]: s9 lockspace hosted-engine:1:/var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde:0
2019-01-19 19:30:07 4750 [2837]: verify_leader 1 wrong magic 0 /var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde
2019-01-19 19:30:07 4750 [2837]: leader1 delta_acquire_begin error -223 lockspace hosted-engine host_id 1
2019-01-19 19:30:07 4750 [2837]: leader2 path /var/run/vdsm/storage/b388324b-eaf4-4158-8e1b-0b7c9b861002/5a849a03-ecbc-4b3f-b558-ec2ebbc42c6f/dd663799-36ed-47de-8208-d357f803efde offset 0
2019-01-19 19:30:07 4750 [2837]: leader3 m 0 v 30003 ss 512 nh 0 mh 1 oi 0 og 0 lv 0
2019-01-19 19:30:07 4750 [2837]: leader4 sn hosted-engine rn ts 0 cs 60346c59
2019-01-19 19:30:08 4751 [4603]: s9 add_lockspace fail result -223
Can someone guide me on how to proceed further? Can debugging be enabled for sanlock?
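On "wrong magic 0": sanlock prints the on-disk header it read, and a magic of 0 suggests the lease area reads back as all zeros, i.e. the lockspace was never written or was wiped. `sanlock direct dump <path>` should show this without any custom code, and sanlock(8) documents daemon debug options. Purely as an illustration, here is a small sketch that peeks at the first bytes of a lease file; the delta-lease magic constant and little-endian layout are assumptions to double-check against the sanlock sources for your version:

```python
import struct

# Assumed sanlock delta-lease header magic (verify against your
# sanlock version's source; this is an assumption, not confirmed here).
DELTA_LEASE_MAGIC = 0x12212010

def lease_header_magic(path):
    """Read the first 4 bytes of a lease file as a little-endian uint32."""
    with open(path, "rb") as f:
        header = f.read(4)
    if len(header) < 4:
        return None
    return struct.unpack("<I", header)[0]

def looks_zeroed(path):
    """True when the header magic is 0, matching sanlock's 'wrong magic 0'."""
    return lease_header_magic(path) == 0
```

A zeroed header would be consistent with the repeated add_lockspace failures with result -223 above.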
Best Regards,
Strahil Nikolov
From: Strahil Nikolov <hunter86_bg(a)yahoo.com>
To: Simone Tiraboschi <stirabos(a)redhat.com>
Cc: users <users(a)ovirt.org>
Sent: Saturday, 19 January 2019, 17:54
Subject: Re: [ovirt-users] HyperConverged Self-Hosted deployment fails
Thanks Simone,
I will check the broker. I didn't specify the layout correctly - it's 'replica
3 arbiter 1', which was OK the last time I used this layout.
Best Regards,
Strahil Nikolov
From: Simone Tiraboschi <stirabos(a)redhat.com>
To: hunter86bg <hunter86_bg(a)yahoo.com>
Cc: users <users(a)ovirt.org>
Sent: Saturday, 19 January 2019, 17:42
Subject: Re: [ovirt-users] HyperConverged Self-Hosted deployment fails
On Sat, Jan 19, 2019 at 1:07 PM <hunter86_bg(a)yahoo.com> wrote:
Hello Community,
recently I somehow managed to deploy a 2-node cluster on GlusterFS, but after a serious
engine failure I have decided to start from scratch.
A 2-node hyperconverged gluster is definitely a bad idea since it's not going to
protect you from split brains. Please choose 1 or 3 nodes, but not 2.
What I have done so far:
1. Install CentOS 7 from scratch
2. Add the oVirt repositories, vdo, and cockpit for oVirt
3. Deployed the gluster cluster using cockpit
4. Tried to deploy the hosted-engine, which has failed several times.
Without any logs it's difficult to guess what really happened, but I think that it
could be related to the two-node approach, which is explicitly prevented.
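For reference, the relevant logs are usually found in the following locations on the host (paths as I recall them for EL7 oVirt installs; verify on your system):

```shell
# hosted-engine deployment logs (one file per deployment attempt)
ls -l /var/log/ovirt-hosted-engine-setup/

# HA agent and broker logs
tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log

# sanlock and vdsm logs, often needed alongside the above
tail -n 50 /var/log/sanlock.log
tail -n 50 /var/log/vdsm/vdsm.log
```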
Up to now I have detected that ovirt-ha-agent is giving:
яну 19 13:54:57 ovirt1.localdomain ovirt-ha-agent[16992]: ovirt-ha-agent ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent
    return action(he)
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper
    return he.start_monitoring()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 413, in start_monitoring
    self._initialize_broker()
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 535, in _initialize_broker
    m.get('options', {}))
  File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 83, in start_monitor
    .format(type, options, e))
RequestError: Failed to start monitor ping, options {'addr': '192.168.1.1'}: [Errno 2] No such file or directory
This simply means that ovirt-ha-agent fails to communicate with ovirt-ha-broker over a
unix domain socket (in order to send a ping to check network connectivity).
'[Errno 2] No such file or directory' means that the socket is closed on the
ovirt-ha-broker side: you can probably see why by checking
/var/log/ovirt-hosted-engine-ha/broker.log, but if you didn't successfully complete the
setup this is not surprising, and I strongly suggest correctly completing the
deployment before trying anything else.
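To illustrate why '[Errno 2]' points at a missing socket file rather than a network problem, here is a minimal sketch of probing a unix domain socket (the broker socket path mentioned in the comment is an assumption; the real one is visible in broker.log):

```python
import socket

def probe_unix_socket(path):
    """Try to connect to a unix domain socket.
    Returns None on success, or the errno of the failure."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return None
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Assumed broker socket location on EL7 hosts (an assumption to verify):
#   /var/run/ovirt-hosted-engine-ha/broker.socket
# If the broker never created the file, connect() fails with ENOENT,
# which surfaces as "[Errno 2] No such file or directory".
```

If the socket file exists but the broker process has died, the failure is typically ECONNREFUSED instead, which helps distinguish a never-started broker from a crashed one.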
According to https://access.redhat.com/solutions/3353391, the
/etc/ovirt-hosted-engine/hosted-engine.conf should be empty, but mine is OK:
[root@ovirt1 tmp]# cat /etc/ovirt-hosted-engine/hosted-engine.conf
fqdn=engine.localdomain
vm_disk_id=bb0a9839-a05d-4d0a-998c-74da539a9574
vm_disk_vol_id=c1fc3c59-bc6e-4b74-a624-557a1a62a34f
vmid=d0e695da-ec1a-4d6f-b094-44a8cac5f5cd
storage=ovirt1.localdomain:/engine
nfs_version=
mnt_options=backup-volfile-servers=ovirt2.localdomain:ovirt3.localdomain
conf=/var/run/ovirt-hosted-engine-ha/vm.conf
host_id=1
console=vnc
domainType=glusterfs
spUUID=00000000-0000-0000-0000-000000000000
sdUUID=444e524e-9008-48f8-b842-1ce7b95bf248
connectionUUID=e29cf818-5ee5-46e1-85c1-8aeefa33e95d
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject="C=EN, L=Test, O=Test, CN=Test"
vdsm_use_ssl=true
gateway=192.168.1.1
bridge=ovirtmgmt
metadata_volume_UUID=a3be2390-017f-485b-8f42-716fb6094692
metadata_image_UUID=368fb8dc-6049-4ef0-8cf8-9d3c4d772d59
lockspace_volume_UUID=41762f85-5d00-488f-bcd0-3de49ec39e8b
lockspace_image_UUID=de100b9b-07ac-4986-9d86-603475572510
conf_volume_UUID=4306f6d6-7fe9-499d-81a5-6b354e8ecb79
conf_image_UUID=d090dd3f-fc62-442a-9710-29eeb56b0019
# The following are used only for iSCSI storage
iqn=
portal=
user=
password=
port=
The ovirt-ha-agent version is:
ovirt-hosted-engine-ha-2.2.18-1.el7.noarch
Can you guide me on how to resolve this issue and deploy the self-hosted engine?
Where should I start?
_______________________________________________
Users mailing list -- users(a)ovirt.org
To unsubscribe send an email to users-leave(a)ovirt.org
Privacy Statement:
https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D72UQFMNOEJ...