Re: ovirt-ha-agent not running
by Strahil
What is the content of '/rhev/data-center/mnt/glusterSD/kvm380.durchhalten.intern:_engine/36663740-576a-4498-b28e-0a402628c6a7/ha_agent/'?
Usually, when you restart the broker & agent, they check whether the links exist and recreate them if needed.
You can try to set the host into maintenance (local), then stop ovirt-ha-agent & ovirt-ha-broker. Then start the broker first and the agent after it.
You can increase the log level via:
/etc/ovirt-hosted-engine-ha/agent-log.conf
/etc/ovirt-hosted-engine-ha/broker-log.conf
(Don't forget to restart the services afterwards.)
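For example, a rough sequence on the affected host could look like this (a sketch only; the storage path in the last command is a placeholder for your environment):

hosted-engine --set-maintenance --mode=local
systemctl stop ovirt-ha-agent ovirt-ha-broker
systemctl start ovirt-ha-broker
systemctl start ovirt-ha-agent
hosted-engine --set-maintenance --mode=none
# check that the links under ha_agent were recreated
ls -l /rhev/data-center/mnt/glusterSD/<server>:_engine/<domain-uuid>/ha_agent/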
Best Regards,
Strahil Nikolov
On Dec 7, 2019 17:22, Stefan Wolf <shb256(a)gmail.com> wrote:
>
> And here is the broker.log:
>
> MainThread::INFO::2019-12-07 15:20:03,563::broker::47::ovirt_hosted_engine_ha.broker.broker.Broker::(run) ovirt-hosted-engine-ha broker 2.3.6 started
> MainThread::INFO::2019-12-07 15:20:03,564::monitor::40::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Searching for submonitors in /usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/submonitors
> MainThread::INFO::2019-12-07 15:20:03,564::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
> MainThread::INFO::2019-12-07 15:20:03,565::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
> MainThread::INFO::2019-12-07 15:20:03,566::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mem-free
> MainThread::INFO::2019-12-07 15:20:03,566::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
> MainThread::INFO::2019-12-07 15:20:03,567::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
> MainThread::INFO::2019-12-07 15:20:03,567::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
> MainThread::INFO::2019-12-07 15:20:03,568::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
> MainThread::INFO::2019-12-07 15:20:03,569::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load
> MainThread::INFO::2019-12-07 15:20:03,569::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor network
> MainThread::INFO::2019-12-07 15:20:03,574::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
> MainThread::INFO::2019-12-07 15:20:03,575::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor cpu-load-no-engine
> MainThread::INFO::2019-12-07 15:20:03,576::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor mgmt-bridge
> MainThread::INFO::2019-12-07 15:20:03,577::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor engine-health
> MainThread::INFO::2019-12-07 15:20:03,577::monitor::49::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Loaded submonitor storage-domain
> MainThread::INFO::2019-12-07 15:20:03,577::monitor::50::ovirt_hosted_engine_ha.broker.monitor.Monitor::(_discover_submonitors) Finished loading submonitors
> MainThread::INFO::2019-12-07 15:20:03,651::storage_backends::373::ovirt_hosted_engine_ha.lib.storage_backends::(connect) Connecting the storage
> MainThread::INFO::2019-12-07 15:20:03,652::storage_server::349::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
> MainThread::INFO::2019-12-07 15:20:03,716::storage_server::356::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Connecting storage server
> MainThread::INFO::2019-12-07 15:20:03,748::storage_server::413::ovirt_hosted_engine_ha.lib.storage_server.StorageServer::(connect_storage_server) Refreshing the storage domain
> MainThread::WARNING::2019-12-07 15:20:06,985::storage_broker::97::ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker::(__init__) Can't connect vdsm storage: [Errno 107] Transport endpoint is not connected: '/rhev/data-center/mnt/glusterSD/kvm380.durchhalten.intern:_engine/36663740-576a-4498-b28e-0a402628c6a7/ha_agent/hosted-engine.lockspace'
>
>
> Maybe it helps.
>
> bye shb
Re: Hyper-V to KVM | v2v
by Strahil
Have you tried to power up and fully shut down the VM?
Best Regards,
Strahil Nikolov
On Dec 6, 2019 15:52, Vijay Sachdeva <vijay.sachdeva(a)indiqus.com> wrote:
>
> Hello,
>
> I am trying to convert a Hyper-V guest VM to run on KVM using virt-v2v utility.
>
> Getting below error:
>
> Any suggestions on how to solve this?
>
> Thanks
>
> Vijay Sachdeva
ovirt-ha-agent not running
by Stefan Wolf
Hello,
For some days now, ovirt-ha-agent has not been running anymore.
I have 4 oVirt hosts, and the agent is running on only one of them.
Maybe it came from an update, because I lost one agent after another.
I have done a completely fresh install of the host with the latest oVirt Node.
On three hosts I get this error:
[root@kvm380 ~]# systemctl status ovirt-ha-agent
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring Agent
Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; vendor preset: disabled)
Active: activating (auto-restart) (Result: exit-code) since Sa 2019-12-07 14:56:21 UTC; 5s ago
Process: 28002 ExecStart=/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent (code=exited, status=157)
Main PID: 28002 (code=exited, status=157)
Tasks: 0
CGroup: /system.slice/ovirt-ha-agent.service
Dez 07 14:56:21 kvm380.durchhalten.intern systemd[1]: ovirt-ha-agent.service: main process exited, code=exited, status=157/n/a
Dez 07 14:56:21 kvm380.durchhalten.intern systemd[1]: Unit ovirt-ha-agent.service entered failed state.
Dez 07 14:56:21 kvm380.durchhalten.intern systemd[1]: ovirt-ha-agent.service failed.
And this is in /var/log/ovirt-hosted-engine-ha/agent.log:
MainThread::INFO::2019-12-07 15:01:51,048::agent::67::ovirt_hosted_engine_ha.agent.agent.Agent::(run) ovirt-hosted-engine-ha agent 2.3.6 started
MainThread::INFO::2019-12-07 15:01:51,161::hosted_engine::234::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_get_hostname) Found certificate common name: kvm380.durchhalten.intern
MainThread::INFO::2019-12-07 15:01:51,374::hosted_engine::543::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Initializing ha-broker connection
MainThread::INFO::2019-12-07 15:01:51,378::brokerlink::80::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(start_monitor) Starting monitor network, options {'tcp_t_address': None, 'network_test': None, 'tcp_t_port': None, 'addr': '192.168.200.1'}
MainThread::ERROR::2019-12-07 15:01:51,379::hosted_engine::559::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_broker) Failed to start necessary monitors
MainThread::ERROR::2019-12-07 15:01:51,381::agent::144::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 131, in _run_agent
return action(he)
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 55, in action_proper
return he.start_monitoring()
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 432, in start_monitoring
self._initialize_broker()
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 556, in _initialize_broker
m.get('options', {}))
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", line 89, in start_monitor
).format(t=type, o=options, e=e)
RequestError: brokerlink - failed to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, [monitor: 'network', options: {'tcp_t_address': None, 'network_test': None, 'tcp_t_port': None, 'addr': '192.168.200.1'}]
MainThread::ERROR::2019-12-07 15:01:51,381::agent::145::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent) Trying to restart agent
MainThread::INFO::2019-12-07 15:01:51,382::agent::89::ovirt_hosted_engine_ha.agent.agent.Agent::(run) Agent shutting down
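(For context, the broker side on the same host can be checked with something like the commands below; the run directory in the last line is an assumption based on a default install:)

systemctl status ovirt-ha-broker
journalctl -u ovirt-ha-broker -n 50
ls -l /var/run/ovirt-hosted-engine-ha/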
Maybe someone can give me some advice.
thx shb
Hyper-V to KVM | v2v
by Vijay Sachdeva
Hello,
I am trying to convert a Hyper-V guest VM to run on KVM using virt-v2v utility.
Getting below error:
Any suggestions on how to solve this?
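In case it helps to compare, a typical virt-v2v invocation for a Hyper-V disk image looks roughly like this (disk path and output directory are placeholders, not the exact command used here):

virt-v2v -i disk /data/hyperv/guest.vhdx -o local -os /var/tmp/v2v -of qcow2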
Thanks
Vijay Sachdeva
Issue deploying self hosted engine on new install
by Robert Webb
Hi all,
I am trying to deploy the self-hosted engine via Cockpit, and it fails at the very end of the deployment, when it checks the VM health. I have done multiple clean installs, and it fails every time.
I have also tried from the CLI via SSH, with the same outcome.
After the failure, if I look at the oVirt Machines section of the node, the VM is there but will not start.
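For reference, the state after the failure can be inspected with something like this (the log directory is assumed to be the default one):

hosted-engine --vm-status
ls -lt /var/log/ovirt-hosted-engine-setup/
virsh -r list --all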
Anyone else having this issue?
[ANN] oVirt 4.3.8 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.8 First Release Candidate for testing, as of December 6th, 2019.
This update is a release candidate of the eighth in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built consuming
CentOS 7.7 Release
See the release notes [1] for known issues, new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Node is already available
Additional Resources:
* Read more about the oVirt 4.3.8 release highlights:
http://www.ovirt.org/release/4.3.8/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.8/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
Gluster FS & hosted engine fails to set up
by rob.downer@orbitalsystems.co.uk
I have set up 3 new servers and, as you can see, Gluster is working well; however, the hosted engine deployment fails.
Can anyone suggest a reason?
I have wiped and set up all three servers again and set up Gluster first.
This is the Gluster config I have used for the setup.
Please review the configuration. Once you click the 'Finish Deployment' button, the management VM will be transferred to the configured storage and the configuration of your hosted engine cluster will be finalized. You will be able to use your hosted engine once this step finishes.
* Storage
Storage Type: glusterfs
Storage Domain Connection: gfs3.gluster.private:/engine
Mount Options: backup-volfile-servers=gfs2.gluster.private:gfs1.gluster.private
Disk Size (GiB): 58
[ INFO ] TASK [ovirt.hosted_engine_setup : Add glusterfs storage domain]
[ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is "[Unexpected exception]". HTTP response code is 400.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Unexpected exception]\". HTTP response code is 400."}
[root@ovirt3 ~]# gluster volume status
Status of volume: data
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gfs3.gluster.private:/gluster_bricks/
data/data 49152 0 Y 3756
Brick gfs2.gluster.private:/gluster_bricks/
data/data 49153 0 Y 3181
Brick gfs1.gluster.private:/gluster_bricks/
data/data 49152 0 Y 15548
Self-heal Daemon on localhost N/A N/A Y 17602
Self-heal Daemon on gfs1.gluster.private N/A N/A Y 15706
Self-heal Daemon on gfs2.gluster.private N/A N/A Y 3348
Task Status of Volume data
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: engine
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gfs3.gluster.private:/gluster_bricks/
engine/engine 49153 0 Y 3769
Brick gfs2.gluster.private:/gluster_bricks/
engine/engine 49154 0 Y 3194
Brick gfs1.gluster.private:/gluster_bricks/
engine/engine 49153 0 Y 15559
Self-heal Daemon on localhost N/A N/A Y 17602
Self-heal Daemon on gfs1.gluster.private N/A N/A Y 15706
Self-heal Daemon on gfs2.gluster.private N/A N/A Y 3348
Task Status of Volume engine
------------------------------------------------------------------------------
There are no active volume tasks
Status of volume: vmstore
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gfs3.gluster.private:/gluster_bricks/
vmstore/vmstore 49154 0 Y 3786
Brick gfs2.gluster.private:/gluster_bricks/
vmstore/vmstore 49152 0 Y 2901
Brick gfs1.gluster.private:/gluster_bricks/
vmstore/vmstore 49154 0 Y 15568
Self-heal Daemon on localhost N/A N/A Y 17602
Self-heal Daemon on gfs1.gluster.private N/A N/A Y 15706
Self-heal Daemon on gfs2.gluster.private N/A N/A Y 3348
Task Status of Volume vmstore
------------------------------------------------------------------------------
There are no active volume tasks
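To rule out a plain mount problem, the same mount the wizard attempts can also be tried by hand (the scratch mount point is a placeholder):

mkdir -p /mnt/enginetest
mount -t glusterfs -o backup-volfile-servers=gfs2.gluster.private:gfs1.gluster.private gfs3.gluster.private:/engine /mnt/enginetest
touch /mnt/enginetest/write-test && rm /mnt/enginetest/write-test
umount /mnt/enginetest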
Re: Importing a Virtual Machine from a KVM Host
by Dirk Streubel
Hello Paul,
Thanks a lot for your answer.
If I have any other questions during the import, may I ask again?
And I will let you know the result :)
Regards
Dirk
On 05.12.19 at 13:22, Staniforth, Paul wrote:
> Hello Dirk,
> The link you are using is from the old developer docs; try the following.
> https://www.ovirt.org/documentation/admin-guide/chap-External_Providers.html
>
> You need to set up an external provider and use a proxy host in the datacentre (not the engine host) you want to import into; this host needs to be on the same network as your KVM host and have the virt-v2v package installed.
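> A rough sketch of what that proxy host needs (the KVM host name is a placeholder, and the exact steps may differ per version):
>
> yum install -y virt-v2v
> sudo -u vdsm ssh-keygen
> sudo -u vdsm ssh-copy-id root@host1.example.org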
>
> Regards,
> Paul S.
>
> -----Original Message-----
> From: Dirk Streubel <dirk.streubel(a)posteo.de>
> Sent: 05 December 2019 11:10
> To: users <users(a)ovirt.org>
> Subject: [ovirt-users] Importing a Virtual Machine from a KVM Host
>
> Hello everybody,
>
> I am just a little bit confused about importing QEMU images from my KVM host.
>
> At home and at work I am using an oVirt engine in a VM. The version is 4.3.7, and my host is on bare metal.
>
> Everything works fine, but I have the same problem at home and at work: I want to import my qcow2 images from a KVM host, but it will not work.
>
> The command virsh -r -c 'qemu+tcp://root@host1.example.org/system' list --all shows me all my VMs.
>
> But when I want to load the images in the GUI, nothing happens. I followed these instructions:
>
> https://eur02.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.ovi...
>
> After that, I took a look at the official 4.3 RHEV documentation from Red Hat. It says this:
>
> Log in to the proxy host and generate SSH keys for the vdsm user.
>
> sudo -u vdsm ssh-keygen
>
> So, I have no proxy host, and I want to do this on my ovirt-engine. Is this right?
>
> Next thing: the user vdsm is locked on the ovirt-engine machine and the host.
>
> So, do I have to unlock the account of the user vdsm, or what is the best way?
>
> My problem is that I found nothing that would help me solve this.
>
> Maybe somebody here can do me a favour and help me solve this "challenge".
>
> Regards
>
> Dirk
Possible sources of cpu steal and countermeasures
by klaasdemter@gmail.com
Hi,
I'm having performance issues with an oVirt installation. It is showing
high steal (5-10%) for a CPU-intensive VM. The hypervisor, however, has
more than 65% of its resources idle while the steal is seen inside
the VM.
Even when placing only a single VM on a hypervisor, that VM still sees
steal (0-2%), even though the hypervisor is not overcommitted.
Hypervisor:
2-socket system, in total 2*28 (56 HT) cores
VM:
30 vCPUs (oVirt seems to think it's a good idea to make that 15 sockets *
2 cores)
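For reference, the host topology and the VM's current placement can be inspected with something like this (the VM name is a placeholder):

lscpu | grep -i numa
numactl --hardware
virsh -r vcpuinfo <vm-name>
virsh -r numatune <vm-name>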
My questions are:
a) Could it be that the hypervisor is trying to schedule all 30 cores on
a single NUMA node, i.e. using the HT cores instead of "real" ones, and
this shows up as steal?
b) Do I need to make VMs this big NUMA-aware and spread the VM over both
NUMA nodes?
c) Would using the High Performance VM type help in this kind of situation?
d) General advice: how do I reduce steal in an environment where the
hypervisor has idle resources?
Any advice would be appreciated.
Greetings
Klaas