<div dir="ltr">I'am on Centos 6.5 and this repo is for fedora...</div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-04-28 12:16 GMT+02:00 Kevin Tibi <span dir="ltr"><<a href="mailto:kevintibi@hotmail.com" target="_blank">kevintibi@hotmail.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi,<div><br></div><div><div>qemu-kvm-0.12.1.2-2.415.el6_5.8.x86_64</div></div><div><div>libvirt-0.10.2-29.el6_5.7.x86_64</div>
</div><div><div>vdsm-4.14.6-0.el6.x86_64</div></div><div><div>kernel-2.6.32-431.el6.x86_64</div>
<div>kernel-2.6.32-431.11.2.el6.x86_64</div></div><div><br></div><div>I added this repo and tried to update.</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">2014-04-28 11:57 GMT+02:00 Martin Sivak <span dir="ltr"><<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>></span>:<div>
<div class="h5"><br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Kevin,<br>
<br>
thanks for the information.<br>
<div><br>
> Agent.log and broker.log say nothing.<br>
<br>
</div>Can you please attach those files? I would like to see how the crashed Qemu process is reported to us and what state machine transitions cause the load.<br>
<div><br>
> 07:23:58,994::libvirtconnection::124::root::(wrapper) Unknown libvirterror:<br>
> ecode: 84 edom: 10 level: 2 message: Operation not supported: live disk<br>
> snapshot not supported with this QEMU binary<br>
<br>
</div>What are the versions of vdsm, libvirt, qemu-kvm and kernel?<br>
<br>
If you feel like it, try updating the virt packages from the virt-preview repository: <a href="http://fedoraproject.org/wiki/Virtualization_Preview_Repository" target="_blank">http://fedoraproject.org/wiki/Virtualization_Preview_Repository</a><br>
<div><div><br>
--<br>
Martin Sivák<br>
<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a><br>
Red Hat Czech<br>
RHEV-M SLA / Brno, CZ<br>
<br>
----- Original Message -----<br>
> Hi,<br>
><br>
> I use this version: ovirt-hosted-engine-ha-1.1.2-1.el6.noarch<br>
><br>
> For 3 days my engine-ha worked perfectly, but I tried to snapshot a VM and<br>
> the ha service became defunct ==> 400% CPU!!<br>
><br>
> Agent.log and broker.log say nothing. But in vdsm.log I have errors:<br>
><br>
> Thread-9462::DEBUG::2014-04-28<br>
> 07:23:58,994::libvirtconnection::124::root::(wrapper) Unknown libvirterror:<br>
> ecode: 84 edom: 10 level: 2 message: Operation not supported: live disk<br>
> snapshot not supported with this QEMU binary<br>
><br>
> Thread-9462::ERROR::2014-04-28 07:23:58,995::vm::4006::vm.Vm::(snapshot)<br>
> vmId=`773f6e6d-c670-49f3-ae8c-dfbcfa22d0a5`::Unable to take snapshot<br>
><br>
><br>
> Thread-9352::DEBUG::2014-04-28<br>
> 08:41:39,922::lvm::295::Storage.Misc.excCmd::(cmd) '/usr/bin/sudo -n<br>
> /sbin/lvm vgs --config " devices { preferred_names = [\\"^/dev/mapper/\\"]<br>
> ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3<br>
> obtain_device_list_from_udev=0 filter = [ \'r|.*|\' ] } global {<br>
> locking_type=1 prioritise_write_locks=1 wait_for_locks=1 } backup {<br>
> retain_min = 50 retain_days = 0 } " --noheadings --units b --nosuffix<br>
> --separator | -o<br>
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name<br>
> cc51143e-8ad7-4b0b-a4d2-9024dffc1188 ff98d346-4515-4349-8437-fb2f5e9eaadf'<br>
> (cwd None)<br>
><br>
> I'll try to reboot my node with hosted-engine.<br>
><br>
><br>
><br>
> 2014-04-25 13:54 GMT+02:00 Martin Sivak <<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>>:<br>
><br>
> > Hi Kevin,<br>
> ><br>
> > can you please tell us what version of hosted-engine you are running?<br>
> ><br>
> > rpm -q ovirt-hosted-engine-ha<br>
> ><br>
> > Also, do I understand it correctly that the engine VM is running, but you<br>
> > see bad status when you execute the hosted-engine --vm-status command?<br>
> ><br>
> > If that is so, can you give us current logs from<br>
> > /var/log/ovirt-hosted-engine-ha?<br>
> ><br>
> > --<br>
> > Martin Sivák<br>
> > <a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a><br>
> > Red Hat Czech<br>
> > RHEV-M SLA / Brno, CZ<br>
> ><br>
> > ----- Original Message -----<br>
> > > OK, I mounted the domain for hosted engine manually and the agent came up.<br>
> > ><br>
> > > But vm-status :<br>
> > ><br>
> > > --== Host 2 status ==--<br>
> > ><br>
> > > Status up-to-date : False<br>
> > > Hostname : 192.168.99.103<br>
> > > Host ID : 2<br>
> > > Engine status : unknown stale-data<br>
> > > Score : 0<br>
> > > Local maintenance : False<br>
> > > Host timestamp : 1398333438<br>
> > ><br>
> > > And in my engine, host02 HA is not active.<br>
> > ><br>
> > ><br>
> > > 2014-04-24 12:48 GMT+02:00 Kevin Tibi <<a href="mailto:kevintibi@hotmail.com" target="_blank">kevintibi@hotmail.com</a>>:<br>
> > ><br>
> > > > Hi,<br>
> > > ><br>
> > > > I tried to reboot my hosts and now [supervdsmServer] is <defunct>.<br>
> > > ><br>
> > > > /var/log/vdsm/supervdsm.log<br>
> > > ><br>
> > > ><br>
> > > > MainProcess|Thread-120::DEBUG::2014-04-24<br>
> > > > 12:22:19,955::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)<br>
> > > > return validateAccess with None<br>
> > > > MainProcess|Thread-120::DEBUG::2014-04-24<br>
> > > > 12:22:20,010::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)<br>
> > call<br>
> > > > validateAccess with ('qemu', ('qemu', 'kvm'),<br>
> > > > '/rhev/data-center/mnt/host01.ovirt.lan:_home_export', 5) {}<br>
> > > > MainProcess|Thread-120::DEBUG::2014-04-24<br>
> > > > 12:22:20,014::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)<br>
> > > > return validateAccess with None<br>
> > > > MainProcess|Thread-120::DEBUG::2014-04-24<br>
> > > > 12:22:20,059::supervdsmServer::96::SuperVdsm.ServerCallback::(wrapper)<br>
> > call<br>
> > > > validateAccess with ('qemu', ('qemu', 'kvm'),<br>
> > > > '/rhev/data-center/mnt/host01.ovirt.lan:_home_iso', 5) {}<br>
> > > > MainProcess|Thread-120::DEBUG::2014-04-24<br>
> > > > 12:22:20,063::supervdsmServer::103::SuperVdsm.ServerCallback::(wrapper)<br>
> > > > return validateAccess with None<br>
> > > ><br>
> > > > and one host doesn't mount the NFS used for hosted engine.<br>
> > > ><br>
> > > > MainThread::CRITICAL::2014-04-24<br>
> > > ><br>
> > 12:36:16,603::agent::103::ovirt_hosted_engine_ha.agent.agent.Agent::(run)<br>
> > > > Could not start ha-agent<br>
> > > > Traceback (most recent call last):<br>
> > > > File<br>
> > > ><br>
> > "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py",<br>
> > > > line 97, in run<br>
> > > > self._run_agent()<br>
> > > > File<br>
> > > ><br>
> > "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py",<br>
> > > > line 154, in _run_agent<br>
> > > ><br>
> > hosted_engine.HostedEngine(self.shutdown_requested).start_monitoring()<br>
> > > > File<br>
> > > ><br>
> > "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",<br>
> > > > line 299, in start_monitoring<br>
> > > > self._initialize_vdsm()<br>
> > > > File<br>
> > > ><br>
> > "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",<br>
> > > > line 418, in _initialize_vdsm<br>
> > > > self._sd_path = env_path.get_domain_path(self._config)<br>
> > > > File<br>
> > > > "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/env/path.py",<br>
> > line<br>
> > > > 40, in get_domain_path<br>
> > > > .format(sd_uuid, parent))<br>
> > > > Exception: path to storage domain aea040f8-ab9d-435b-9ecf-ddd4272e592f<br>
> > not<br>
> > > > found in /rhev/data-center/mnt<br>
> > > ><br>
> > > ><br>
> > > ><br>
> > > > 2014-04-23 17:40 GMT+02:00 Kevin Tibi <<a href="mailto:kevintibi@hotmail.com" target="_blank">kevintibi@hotmail.com</a>>:<br>
> > > ><br>
> > > > top<br>
> > > >> 1729 vdsm 20 0 0 0 0 Z 373.8 0.0 252:08.51<br>
> > > >> ovirt-ha-broker <defunct><br>
> > > >><br>
> > > >><br>
> > > >> [root@host01 ~]# ps axwu | grep 1729<br>
> > > >> vdsm 1729 0.7 0.0 0 0 ? Zl Apr02 240:24<br>
> > > >> [ovirt-ha-broker] <defunct><br>
> > > >><br>
> > > >> [root@host01 ~]# ll<br>
> > > >><br>
> > /rhev/data-center/mnt/host01.ovirt.lan\:_home_NFS01/aea040f8-ab9d-435b-9ecf-ddd4272e592f/ha_agent/<br>
> > > >> total 2028<br>
> > > >> -rw-rw----. 1 vdsm kvm 1048576 23 avril 17:35 hosted-engine.lockspace<br>
> > > >> -rw-rw----. 1 vdsm kvm 1028096 23 avril 17:35 hosted-engine.metadata<br>
> > > >><br>
> > > >> cat /var/log/vdsm/vdsm.log<br>
> > > >><br>
> > > >> Thread-120518::DEBUG::2014-04-23<br>
> > > >> 17:38:02,299::task::1185::TaskManager.Task::(prepare)<br>
> > > >> Task=`f13e71f1-ac7c-49ab-8079-8f099ebf72b6`::finished:<br>
> > > >> {'aea040f8-ab9d-435b-9ecf-ddd4272e592f': {'code': 0, 'version': 3,<br>
> > > >> 'acquired': True, 'delay': '0.000410963', 'lastCheck': '3.4', 'valid':<br>
> > > >> True}, '5ae613a4-44e4-42cb-89fc-7b5d34c1f30f': {'code': 0, 'version':<br>
> > 3,<br>
> > > >> 'acquired': True, 'delay': '0.000412357', 'lastCheck': '6.8', 'valid':<br>
> > > >> True}, 'cc51143e-8ad7-4b0b-a4d2-9024dffc1188': {'code': 0, 'version':<br>
> > 0,<br>
> > > >> 'acquired': True, 'delay': '0.000455292', 'lastCheck': '1.2', 'valid':<br>
> > > >> True}, 'ff98d346-4515-4349-8437-fb2f5e9eaadf': {'code': 0, 'version':<br>
> > 0,<br>
> > > >> 'acquired': True, 'delay': '0.00817113', 'lastCheck': '1.7', 'valid':<br>
> > > >> True}}<br>
> > > >> Thread-120518::DEBUG::2014-04-23<br>
> > > >> 17:38:02,300::task::595::TaskManager.Task::(_updateState)<br>
> > > >> Task=`f13e71f1-ac7c-49ab-8079-8f099ebf72b6`::moving from state<br>
> > preparing<br>
> > > >> -><br>
> > > >> state finished<br>
> > > >> Thread-120518::DEBUG::2014-04-23<br>
> > > >><br>
> > 17:38:02,300::resourceManager::940::ResourceManager.Owner::(releaseAll)<br>
> > > >> Owner.releaseAll requests {} resources {}<br>
> > > >> Thread-120518::DEBUG::2014-04-23<br>
> > > >> 17:38:02,300::resourceManager::977::ResourceManager.Owner::(cancelAll)<br>
> > > >> Owner.cancelAll requests {}<br>
> > > >> Thread-120518::DEBUG::2014-04-23<br>
> > > >> 17:38:02,300::task::990::TaskManager.Task::(_decref)<br>
> > > >> Task=`f13e71f1-ac7c-49ab-8079-8f099ebf72b6`::ref 0 aborting False<br>
> > > >> Thread-120518::ERROR::2014-04-23<br>
> > > >><br>
> > 17:38:02,302::brokerlink::72::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(connect)<br>
> > > >> Failed to connect to broker: [Errno 2] No such file or directory<br>
> > > >> Thread-120518::ERROR::2014-04-23<br>
> > > >> 17:38:02,302::API::1612::vds::(_getHaInfo) failed to retrieve Hosted<br>
> > > >> Engine<br>
> > > >> HA info<br>
> > > >> Traceback (most recent call last):<br>
> > > >> File "/usr/share/vdsm/API.py", line 1603, in _getHaInfo<br>
> > > >> stats = instance.get_all_stats()<br>
> > > >> File<br>
> > > >><br>
> > "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/client/client.py",<br>
> > > >> line 83, in get_all_stats<br>
> > > >> with broker.connection():<br>
> > > >> File "/usr/lib64/python2.6/contextlib.py", line 16, in __enter__<br>
> > > >> return self.gen.next()<br>
> > > >> File<br>
> > > >><br>
> > "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",<br>
> > > >> line 96, in connection<br>
> > > >> self.connect()<br>
> > > >> File<br>
> > > >><br>
> > "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py",<br>
> > > >> line 64, in connect<br>
> > > >> self._socket.connect(constants.BROKER_SOCKET_FILE)<br>
> > > >> File "<string>", line 1, in connect<br>
> > > >> error: [Errno 2] No such file or directory<br>
> > > >> Thread-78::DEBUG::2014-04-23<br>
> > > >> 17:38:05,490::fileSD::225::Storage.Misc.excCmd::(getReadDelay)<br>
> > '/bin/dd<br>
> > > >> iflag=direct<br>
> > > >><br>
> > if=/rhev/data-center/mnt/host01.ovirt.lan:_home_DATA/5ae613a4-44e4-42cb-89fc-7b5d34c1f30f/dom_md/metadata<br>
> > > >> bs=4096 count=1' (cwd None)<br>
> > > >> Thread-78::DEBUG::2014-04-23<br>
> > > >> 17:38:05,523::fileSD::225::Storage.Misc.excCmd::(getReadDelay)<br>
> > SUCCESS:<br>
> > > >> <err> = '0+1 records in\n0+1 records out\n545 bytes (545 B) copied,<br>
> > > >> 0.000412209 s, 1.3 MB/s\n'; <rc> = 0<br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >><br>
> > > >> 2014-04-23 17:27 GMT+02:00 Martin Sivak <<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>>:<br>
> > > >><br>
> > > >> Hi Kevin,<br>
> > > >>><br>
> > > >>> > same problem.<br>
> > > >>><br>
> > > >>> Are you missing the lockspace file as well while running on top of<br>
> > > >>> GlusterFS?<br>
> > > >>><br>
> > > >>> > ovirt-ha-broker has 400% CPU and is defunct. I can't kill it with -9.<br>
> > > >>><br>
> > > >>> A defunct process eating four full cores? I wonder how that is<br>
> > possible...<br>
> > > >>> What are the status flags of that process when you do ps axwu?<br>
> > > >>><br>
> > > >>> Can you attach the log files please?<br>
> > > >>><br>
> > > >>> --<br>
> > > >>> Martin Sivák<br>
> > > >>> <a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a><br>
> > > >>> Red Hat Czech<br>
> > > >>> RHEV-M SLA / Brno, CZ<br>
> > > >>><br>
> > > >>> ----- Original Message -----<br>
> > > >>> > same problem. ovirt-ha-broker has 400% CPU and is defunct. I can't kill it<br>
> > > >>> with -9.<br>
> > > >>> ><br>
> > > >>> ><br>
> > > >>> > 2014-04-23 13:55 GMT+02:00 Martin Sivak <<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>>:<br>
> > > >>> ><br>
> > > >>> > > Hi,<br>
> > > >>> > ><br>
> > > >>> > > > Isn't this file created when hosted engine is started?<br>
> > > >>> > ><br>
> > > >>> > > The file is created by the setup script. If it got lost then<br>
> > there<br>
> > > >>> was<br>
> > > >>> > > probably something bad happening in your NFS or Gluster storage.<br>
> > > >>> > ><br>
> > > >>> > > > Or how can I create this file manually?<br>
> > > >>> > ><br>
> > > >>> > > I can give you experimental treatment for this. We do not have<br>
> > any<br>
> > > >>> > > official way as this is something that should not ever happen :)<br>
> > > >>> > ><br>
> > > >>> > > !! But before you do that make sure you do not have any nodes<br>
> > running<br>
> > > >>> > > properly. This will destroy and reinitialize the lockspace<br>
> > database<br>
> > > >>> for the<br>
> > > >>> > > whole hosted-engine environment (which you apparently lack,<br>
> > but..).<br>
> > > >>> !!<br>
> > > >>> > ><br>
> > > >>> > > You have to create the ha_agent/hosted-engine.lockspace file<br>
> > with the<br>
> > > >>> > > expected size (1MB) and then tell sanlock to initialize it as a<br>
> > > >>> lockspace<br>
> > > >>> > > using:<br>
> > > >>> > ><br>
> > > >>> > > # python<br>
> > > >>> > > >>> import sanlock<br>
> > > >>> > > >>> sanlock.write_lockspace(lockspace="hosted-engine",<br>
> > > >>> > > ... path="/rhev/data-center/mnt/<nfs>/<hosted engine storage<br>
> > > >>> > > domain>/ha_agent/hosted-engine.lockspace",<br>
> > > >>> > > ... offset=0)<br>
> > > >>> > > >>><br>
> > > >>> > ><br>
> > > >>> > > Then try starting the services (both broker and agent) again.<br>
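<br>
A minimal sketch of the same recovery as a one-off script, assuming the sanlock Python binding used in the interactive session above is available, that the placeholder path is replaced with the real hosted-engine storage domain path, and that the file should get the vdsm:kvm / rw-rw---- ownership shown in the directory listings earlier in this thread:<br>
<br>
import grp<br>
import os<br>
import pwd<br>
import sanlock<br>
<br>
# Placeholder path -- substitute your NFS mount and hosted-engine storage domain UUID.<br>
LOCKSPACE = "/rhev/data-center/mnt/<nfs>/<hosted engine storage domain>/ha_agent/hosted-engine.lockspace"<br>
<br>
# Create the file with the expected 1 MB size if it is missing,<br>
# with the vdsm:kvm ownership and rw-rw---- mode seen in the ha_agent listings.<br>
if not os.path.exists(LOCKSPACE):<br>
    with open(LOCKSPACE, "wb") as f:<br>
        f.truncate(1024 * 1024)<br>
    os.chown(LOCKSPACE, pwd.getpwnam("vdsm").pw_uid, grp.getgrnam("kvm").gr_gid)<br>
    os.chmod(LOCKSPACE, 0o660)<br>
<br>
# Initialize it as the "hosted-engine" sanlock lockspace at offset 0, as in the snippet above.<br>
sanlock.write_lockspace(lockspace="hosted-engine", path=LOCKSPACE, offset=0)<br>
<br>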
> > > >>> > ><br>
> > > >>> > > --<br>
> > > >>> > > Martin Sivák<br>
> > > >>> > > <a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a><br>
> > > >>> > > Red Hat Czech<br>
> > > >>> > > RHEV-M SLA / Brno, CZ<br>
> > > >>> > ><br>
> > > >>> > ><br>
> > > >>> > > ----- Original Message -----<br>
> > > >>> > > > On 04/23/2014 11:08 AM, Martin Sivak wrote:<br>
> > > >>> > > > > Hi René,<br>
> > > >>> > > > ><br>
> > > >>> > > > >>>> libvirtError: Failed to acquire lock: No space left on<br>
> > device<br>
> > > >>> > > > ><br>
> > > >>> > > > >>>> 2014-04-22 12:38:17+0200 654 [3093]: r2 cmd_acquire<br>
> > 2,9,5733<br>
> > > >>> invalid<br>
> > > >>> > > > >>>> lockspace found -1 failed 0 name<br>
> > > >>> > > 2851af27-8744-445d-9fb1-a0d083c8dc82<br>
> > > >>> > > > ><br>
> > > >>> > > > > Can you please check the contents of /rhev/data-center/<your<br>
> > nfs<br>
> > > >>> > > > > mount>/<nfs domain uuid>/ha_agent/?<br>
> > > >>> > > > ><br>
> > > >>> > > > > This is how it should look like:<br>
> > > >>> > > > ><br>
> > > >>> > > > > [root@dev-03 ~]# ls -al<br>
> > > >>> > > > ><br>
> > > >>> > ><br>
> > > >>><br>
> > /rhev/data-center/mnt/euryale\:_home_ovirt_he/e16de6a2-53f5-4ab3-95a3-255d08398824/ha_agent/<br>
> > > >>> > > > > total 2036<br>
> > > >>> > > > > drwxr-x---. 2 vdsm kvm 4096 Mar 19 18:46 .<br>
> > > >>> > > > > drwxr-xr-x. 6 vdsm kvm 4096 Mar 19 18:46 ..<br>
> > > >>> > > > > -rw-rw----. 1 vdsm kvm 1048576 Apr 23 11:05<br>
> > > >>> hosted-engine.lockspace<br>
> > > >>> > > > > -rw-rw----. 1 vdsm kvm 1028096 Mar 19 18:46<br>
> > > >>> hosted-engine.metadata<br>
> > > >>> > > > ><br>
> > > >>> > > > > The errors seem to indicate that you somehow lost the<br>
> > lockspace<br>
> > > >>> file.<br>
> > > >>> > > ><br>
> > > >>> > > > True :)<br>
> > > >>> > > > Isn't this file created when hosted engine is started? Or how<br>
> > can I<br>
> > > >>> > > > create this file manually?<br>
> > > >>> > > ><br>
> > > >>> > > > ><br>
> > > >>> > > > > --<br>
> > > >>> > > > > Martin Sivák<br>
> > > >>> > > > > <a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a><br>
> > > >>> > > > > Red Hat Czech<br>
> > > >>> > > > > RHEV-M SLA / Brno, CZ<br>
> > > >>> > > > ><br>
> > > >>> > > > > ----- Original Message -----<br>
> > > >>> > > > >> On 04/23/2014 12:28 AM, Doron Fediuck wrote:<br>
> > > >>> > > > >>> Hi Rene,<br>
> > > >>> > > > >>> any idea what closed your ovirtmgmt bridge?<br>
> > > >>> > > > >>> as long as it is down vdsm may have issues starting up<br>
> > properly<br>
> > > >>> > > > >>> and this is why you see the complaints on the rpc server.<br>
> > > >>> > > > >>><br>
> > > >>> > > > >>> Can you try manually fixing the network part first and then<br>
> > > >>> > > > >>> restart vdsm?<br>
> > > >>> > > > >>> Once vdsm is happy hosted engine VM will start.<br>
> > > >>> > > > >><br>
> > > >>> > > > >> Thanks for your feedback, Doron.<br>
> > > >>> > > > >><br>
> > > >>> > > > >> My ovirtmgmt bridge seems to be up, or isn't it:<br>
> > > >>> > > > >> # brctl show ovirtmgmt<br>
> > > >>> > > > >> bridge name bridge id STP enabled<br>
> > > >>> interfaces<br>
> > > >>> > > > >> ovirtmgmt 8000.0025907587c2 no<br>
> > > >>> eth0.200<br>
> > > >>> > > > >><br>
> > > >>> > > > >> # ip a s ovirtmgmt<br>
> > > >>> > > > >> 7: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500<br>
> > qdisc<br>
> > > >>> noqueue<br>
> > > >>> > > > >> state UNKNOWN<br>
> > > >>> > > > >> link/ether 00:25:90:75:87:c2 brd ff:ff:ff:ff:ff:ff<br>
> > > >>> > > > >> inet <a href="http://10.0.200.102/24" target="_blank">10.0.200.102/24</a> brd 10.0.200.255 scope global<br>
> > > >>> ovirtmgmt<br>
> > > >>> > > > >> inet6 fe80::225:90ff:fe75:87c2/64 scope link<br>
> > > >>> > > > >> valid_lft forever preferred_lft forever<br>
> > > >>> > > > >><br>
> > > >>> > > > >> # ip a s eth0.200<br>
> > > >>> > > > >> 6: eth0.200@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu<br>
> > 1500<br>
> > > >>> qdisc<br>
> > > >>> > > > >> noqueue state UP<br>
> > > >>> > > > >> link/ether 00:25:90:75:87:c2 brd ff:ff:ff:ff:ff:ff<br>
> > > >>> > > > >> inet6 fe80::225:90ff:fe75:87c2/64 scope link<br>
> > > >>> > > > >> valid_lft forever preferred_lft forever<br>
> > > >>> > > > >><br>
> > > >>> > > > >> I tried the following yesterday:<br>
> > > >>> > > > >> Copy virtual disk from GlusterFS storage to local disk of<br>
> > host<br>
> > > >>> and<br>
> > > >>> > > > >> create a new vm with virt-manager which loads ovirtmgmt<br>
> > disk. I<br>
> > > >>> could<br>
> > > >>> > > > >> reach my engine over the ovirtmgmt bridge (so bridge must be<br>
> > > >>> working).<br>
> > > >>> > > > >><br>
> > > >>> > > > >> I also started libvirtd with Option -v and I saw the<br>
> > following<br>
> > > >>> in<br>
> > > >>> > > > >> libvirtd.log when trying to start ovirt engine:<br>
> > > >>> > > > >> 2014-04-22 14:18:25.432+0000: 8901: debug :<br>
> > > >>> virCommandRunAsync:2250 :<br>
> > > >>> > > > >> Command result 0, with PID 11491<br>
> > > >>> > > > >> 2014-04-22 14:18:25.478+0000: 8901: debug :<br>
> > virCommandRun:2045 :<br>
> > > >>> > > Result<br>
> > > >>> > > > >> exit status 255, stdout: '' stderr: 'iptables v1.4.7: goto<br>
> > > >>> 'FO-vnet0'<br>
> > > >>> > > is<br>
> > > >>> > > > >> not a chain<br>
> > > >>> > > > >><br>
> > > >>> > > > >> So it could be that something is broken in my hosted-engine<br>
> > > >>> network.<br>
> > > >>> > > Do<br>
> > > >>> > > > >> you have any clue how I can troubleshoot this?<br>
> > > >>> > > > >><br>
> > > >>> > > > >><br>
> > > >>> > > > >> Thanks,<br>
> > > >>> > > > >> René<br>
> > > >>> > > > >><br>
> > > >>> > > > >><br>
> > > >>> > > > >>><br>
> > > >>> > > > >>> ----- Original Message -----<br>
> > > >>> > > > >>>> From: "René Koch" <<a href="mailto:rkoch@linuxland.at" target="_blank">rkoch@linuxland.at</a>><br>
> > > >>> > > > >>>> To: "Martin Sivak" <<a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a>><br>
> > > >>> > > > >>>> Cc: <a href="mailto:users@ovirt.org" target="_blank">users@ovirt.org</a><br>
> > > >>> > > > >>>> Sent: Tuesday, April 22, 2014 1:46:38 PM<br>
> > > >>> > > > >>>> Subject: Re: [ovirt-users] hosted engine health check<br>
> > issues<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> Hi,<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> I rebooted one of my ovirt hosts today and the result is<br>
> > now<br>
> > > >>> that I<br>
> > > >>> > > > >>>> can't start hosted-engine anymore.<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> ovirt-ha-agent isn't running because the lockspace file is<br>
> > > >>> missing<br>
> > > >>> > > > >>>> (sanlock complains about it).<br>
> > > >>> > > > >>>> So I tried to start hosted-engine with --vm-start and I<br>
> > get<br>
> > > >>> the<br>
> > > >>> > > > >>>> following errors:<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> ==> /var/log/sanlock.log <==<br>
> > > >>> > > > >>>> 2014-04-22 12:38:17+0200 654 [3093]: r2 cmd_acquire<br>
> > 2,9,5733<br>
> > > >>> invalid<br>
> > > >>> > > > >>>> lockspace found -1 failed 0 name<br>
> > > >>> > > 2851af27-8744-445d-9fb1-a0d083c8dc82<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> ==> /var/log/messages <==<br>
> > > >>> > > > >>>> Apr 22 12:38:17 ovirt-host02 sanlock[3079]: 2014-04-22<br>
> > > >>> > > 12:38:17+0200 654<br>
> > > >>> > > > >>>> [3093]: r2 cmd_acquire 2,9,5733 invalid lockspace found -1<br>
> > > >>> failed 0<br>
> > > >>> > > name<br>
> > > >>> > > > >>>> 2851af27-8744-445d-9fb1-a0d083c8dc82<br>
> > > >>> > > > >>>> Apr 22 12:38:17 ovirt-host02 kernel: ovirtmgmt: port<br>
> > 2(vnet0)<br>
> > > >>> > > entering<br>
> > > >>> > > > >>>> disabled state<br>
> > > >>> > > > >>>> Apr 22 12:38:17 ovirt-host02 kernel: device vnet0 left<br>
> > > >>> promiscuous<br>
> > > >>> > > mode<br>
> > > >>> > > > >>>> Apr 22 12:38:17 ovirt-host02 kernel: ovirtmgmt: port<br>
> > 2(vnet0)<br>
> > > >>> > > entering<br>
> > > >>> > > > >>>> disabled state<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> ==> /var/log/vdsm/vdsm.log <==<br>
> > > >>> > > > >>>> Thread-21::DEBUG::2014-04-22<br>
> > > >>> > > > >>>> 12:38:17,563::libvirtconnection::124::root::(wrapper)<br>
> > Unknown<br>
> > > >>> > > > >>>> libvirterror: ecode: 38 edom: 42 level: 2 message: Failed<br>
> > to<br>
> > > >>> acquire<br>
> > > >>> > > > >>>> lock: No space left on device<br>
> > > >>> > > > >>>> Thread-21::DEBUG::2014-04-22<br>
> > > >>> > > > >>>> 12:38:17,563::vm::2263::vm.Vm::(_startUnderlyingVm)<br>
> > > >>> > > > >>>><br>
> > vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::_ongoingCreations<br>
> > > >>> > > released<br>
> > > >>> > > > >>>> Thread-21::ERROR::2014-04-22<br>
> > > >>> > > > >>>> 12:38:17,564::vm::2289::vm.Vm::(_startUnderlyingVm)<br>
> > > >>> > > > >>>> vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::The vm start<br>
> > > >>> process<br>
> > > >>> > > failed<br>
> > > >>> > > > >>>> Traceback (most recent call last):<br>
> > > >>> > > > >>>> File "/usr/share/vdsm/vm.py", line 2249, in<br>
> > > >>> _startUnderlyingVm<br>
> > > >>> > > > >>>> self._run()<br>
> > > >>> > > > >>>> File "/usr/share/vdsm/vm.py", line 3170, in _run<br>
> > > >>> > > > >>>> self._connection.createXML(domxml, flags),<br>
> > > >>> > > > >>>> File<br>
> > > >>> > > > >>>><br>
> > > >>> "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py",<br>
> > > >>> > > > >>>> line 92, in wrapper<br>
> > > >>> > > > >>>> ret = f(*args, **kwargs)<br>
> > > >>> > > > >>>> File "/usr/lib64/python2.6/site-packages/libvirt.py",<br>
> > > >>> line<br>
> > > >>> > > 2665, in<br>
> > > >>> > > > >>>> createXML<br>
> > > >>> > > > >>>> if ret is None:raise<br>
> > libvirtError('virDomainCreateXML()<br>
> > > >>> > > failed',<br>
> > > >>> > > > >>>> conn=self)<br>
> > > >>> > > > >>>> libvirtError: Failed to acquire lock: No space left on<br>
> > device<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> ==> /var/log/messages <==<br>
> > > >>> > > > >>>> Apr 22 12:38:17 ovirt-host02 vdsm vm.Vm ERROR<br>
> > > >>> > > > >>>> vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::The vm start<br>
> > > >>> process<br>
> > > >>> > > > >>>> failed#012Traceback (most recent call last):#012 File<br>
> > > >>> > > > >>>> "/usr/share/vdsm/vm.py", line 2249, in<br>
> > _startUnderlyingVm#012<br>
> > > >>> > > > >>>> self._run()#012 File "/usr/share/vdsm/vm.py", line 3170,<br>
> > in<br>
> > > >>> > > _run#012<br>
> > > >>> > > > >>>> self._connection.createXML(domxml, flags),#012 File<br>
> > > >>> > > > >>>><br>
> > > >>> "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py",<br>
> > > >>> > > line 92,<br>
> > > >>> > > > >>>> in wrapper#012 ret = f(*args, **kwargs)#012 File<br>
> > > >>> > > > >>>> "/usr/lib64/python2.6/site-packages/libvirt.py", line<br>
> > 2665, in<br>
> > > >>> > > > >>>> createXML#012 if ret is None:raise<br>
> > > >>> > > libvirtError('virDomainCreateXML()<br>
> > > >>> > > > >>>> failed', conn=self)#012libvirtError: Failed to acquire<br>
> > lock:<br>
> > > >>> No<br>
> > > >>> > > space<br>
> > > >>> > > > >>>> left on device<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> ==> /var/log/vdsm/vdsm.log <==<br>
> > > >>> > > > >>>> Thread-21::DEBUG::2014-04-22<br>
> > > >>> > > > >>>> 12:38:17,569::vm::2731::vm.Vm::(setDownStatus)<br>
> > > >>> > > > >>>> vmId=`f26dd37e-13b5-430c-b2f2-ecd098b82a91`::Changed<br>
> > state to<br>
> > > >>> Down:<br>
> > > >>> > > > >>>> Failed to acquire lock: No space left on device<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> No space left on device is nonsense as there is enough<br>
> > space<br>
> > > >>> (I had<br>
> > > >>> > > this<br>
> > > >>> > > > >>>> issue last time as well where I had to patch machine.py,<br>
> > but<br>
> > > >>> this<br>
> > > >>> > > file<br>
> > > >>> > > > >>>> is now Python 2.6.6 compatible).<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> Any idea what prevents hosted-engine from starting?<br>
> > > >>> > > > >>>> ovirt-ha-broker, vdsmd and sanlock are running btw.<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> Btw, I can see in log that json rpc server module is<br>
> > missing<br>
> > > >>> - which<br>
> > > >>> > > > >>>> package is required for CentOS 6.5?<br>
> > > >>> > > > >>>> Apr 22 12:37:14 ovirt-host02 vdsm vds WARNING Unable to<br>
> > load<br>
> > > >>> the<br>
> > > >>> > > json<br>
> > > >>> > > > >>>> rpc server module. Please make sure it is installed.<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> Thanks,<br>
> > > >>> > > > >>>> René<br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>><br>
> > > >>> > > > >>>> On 04/17/2014 10:02 AM, Martin Sivak wrote:<br>
> > > >>> > > > >>>>> Hi,<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>>>>> How can I disable notifications?<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>> The notification is configured in<br>
> > > >>> > > > >>>>> /etc/ovirt-hosted-engine-ha/broker.conf<br>
> > > >>> > > > >>>>> section notification.<br>
> > > >>> > > > >>>>> The email is sent when the key state_transition exists<br>
> > and<br>
> > > >>> the<br>
> > > >>> > > string<br>
> > > >>> > > > >>>>> OldState-NewState contains the (case insensitive) regexp<br>
> > > >>> from the<br>
> > > >>> > > > >>>>> value.<br>
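<br>
As an illustration only, assuming the key name exactly as described above and a value chosen so that no OldState-NewState string ever matches it (check the installed broker.conf for the exact section header and defaults):<br>
<br>
# in the notification section of /etc/ovirt-hosted-engine-ha/broker.conf:<br>
# the value is a case-insensitive regexp matched against "OldState-NewState";<br>
# a pattern that never matches (or removing the key) stops the transition mails<br>
state_transition=none<br>
<br>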
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>>>>> Is it intended to send out these messages and detect<br>
> > that<br>
> > > >>> ovirt<br>
> > > >>> > > > >>>>>>>> engine<br>
> > > >>> > > > >>>>>>>> is down (which is false anyway), but not to restart<br>
> > the<br>
> > > >>> vm?<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>> Forget about emails for now and check the<br>
> > > >>> > > > >>>>> /var/log/ovirt-hosted-engine-ha/agent.log and broker.log<br>
> > (and<br>
> > > >>> > > attach<br>
> > > >>> > > > >>>>> them<br>
> > > >>> > > > >>>>> as well btw).<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>>>>> oVirt hosts think that hosted engine is down because<br>
> > it<br>
> > > >>> seems<br>
> > > >>> > > that<br>
> > > >>> > > > >>>>>>>> hosts<br>
> > > >>> > > > >>>>>>>> can't write to hosted-engine.lockspace due to<br>
> > glusterfs<br>
> > > >>> issues<br>
> > > >>> > > (or<br>
> > > >>> > > > >>>>>>>> at<br>
> > > >>> > > > >>>>>>>> least I think so).<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>> The hosts think so or can't really write there? The<br>
> > > >>> lockspace is<br>
> > > >>> > > > >>>>> managed<br>
> > > >>> > > > >>>>> by<br>
> > > >>> > > > >>>>> sanlock and our HA daemons do not touch it at all. We<br>
> > only<br>
> > > >>> ask<br>
> > > >>> > > sanlock<br>
> > > >>> > > > >>>>> to<br>
> > > >>> > > > >>>>> make sure we have a unique server id.<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>>>>> Is it possible or planned to make the whole ha feature<br>
> > > >>> optional?<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>> Well the system won't perform any automatic actions if<br>
> > you<br>
> > > >>> put the<br>
> > > >>> > > > >>>>> hosted<br>
> > > >>> > > > >>>>> engine to global maintenance and only start/stop/migrate<br>
> > the<br>
> > > >>> VM<br>
> > > >>> > > > >>>>> manually.<br>
> > > >>> > > > >>>>> I would discourage you from stopping agent/broker,<br>
> > because<br>
> > > >>> the<br>
> > > >>> > > engine<br>
> > > >>> > > > >>>>> itself has some logic based on the reporting.<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>> Regards<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>> --<br>
> > > >>> > > > >>>>> Martin Sivák<br>
> > > >>> > > > >>>>> <a href="mailto:msivak@redhat.com" target="_blank">msivak@redhat.com</a><br>
> > > >>> > > > >>>>> Red Hat Czech<br>
> > > >>> > > > >>>>> RHEV-M SLA / Brno, CZ<br>
> > > >>> > > > >>>>><br>
> > > >>> > > > >>>>> ----- Original Message -----<br>
> > > >>> > > > >>>>>> On 04/15/2014 04:53 PM, Jiri Moskovcak wrote:<br>
> > > >>> > > > >>>>>>> On 04/14/2014 10:50 AM, René Koch wrote:<br>
> > > >>> > > > >>>>>>>> Hi,<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> I have some issues with hosted engine status.<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> oVirt hosts think that hosted engine is down because<br>
> > it<br>
> > > >>> seems<br>
> > > >>> > > that<br>
> > > >>> > > > >>>>>>>> hosts<br>
> > > >>> > > > >>>>>>>> can't write to hosted-engine.lockspace due to<br>
> > glusterfs<br>
> > > >>> issues<br>
> > > >>> > > (or<br>
> > > >>> > > > >>>>>>>> at<br>
> > > >>> > > > >>>>>>>> least I think so).<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> Here's the output of vm-status:<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> # hosted-engine --vm-status<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> --== Host 1 status ==--<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> Status up-to-date : False<br>
> > > >>> > > > >>>>>>>> Hostname : 10.0.200.102<br>
> > > >>> > > > >>>>>>>> Host ID : 1<br>
> > > >>> > > > >>>>>>>> Engine status : unknown<br>
> > stale-data<br>
> > > >>> > > > >>>>>>>> Score : 2400<br>
> > > >>> > > > >>>>>>>> Local maintenance : False<br>
> > > >>> > > > >>>>>>>> Host timestamp : 1397035677<br>
> > > >>> > > > >>>>>>>> Extra metadata (valid at timestamp):<br>
> > > >>> > > > >>>>>>>> metadata_parse_version=1<br>
> > > >>> > > > >>>>>>>> metadata_feature_version=1<br>
> > > >>> > > > >>>>>>>> timestamp=1397035677 (Wed Apr 9 11:27:57<br>
> > 2014)<br>
> > > >>> > > > >>>>>>>> host-id=1<br>
> > > >>> > > > >>>>>>>> score=2400<br>
> > > >>> > > > >>>>>>>> maintenance=False<br>
> > > >>> > > > >>>>>>>> state=EngineUp<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> --== Host 2 status ==--<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> Status up-to-date : True<br>
> > > >>> > > > >>>>>>>> Hostname : 10.0.200.101<br>
> > > >>> > > > >>>>>>>> Host ID : 2<br>
> > > >>> > > > >>>>>>>> Engine status : {'reason': 'vm<br>
> > not<br>
> > > >>> running<br>
> > > >>> > > on<br>
> > > >>> > > > >>>>>>>> this<br>
> > > >>> > > > >>>>>>>> host', 'health': 'bad', 'vm': 'down', 'detail':<br>
> > 'unknown'}<br>
> > > >>> > > > >>>>>>>> Score : 0<br>
> > > >>> > > > >>>>>>>> Local maintenance : False<br>
> > > >>> > > > >>>>>>>> Host timestamp : 1397464031<br>
> > > >>> > > > >>>>>>>> Extra metadata (valid at timestamp):<br>
> > > >>> > > > >>>>>>>> metadata_parse_version=1<br>
> > > >>> > > > >>>>>>>> metadata_feature_version=1<br>
> > > >>> > > > >>>>>>>> timestamp=1397464031 (Mon Apr 14 10:27:11<br>
> > 2014)<br>
> > > >>> > > > >>>>>>>> host-id=2<br>
> > > >>> > > > >>>>>>>> score=0<br>
> > > >>> > > > >>>>>>>> maintenance=False<br>
> > > >>> > > > >>>>>>>> state=EngineUnexpectedlyDown<br>
> > > >>> > > > >>>>>>>> timeout=Mon Apr 14 10:35:05 2014<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> oVirt engine is sending me 2 emails every 10 minutes<br>
> > with<br>
> > > >>> the<br>
> > > >>> > > > >>>>>>>> following<br>
> > > >>> > > > >>>>>>>> subjects:<br>
> > > >>> > > > >>>>>>>> - ovirt-hosted-engine state transition<br>
> > > >>> EngineDown-EngineStart<br>
> > > >>> > > > >>>>>>>> - ovirt-hosted-engine state transition<br>
> > > >>> EngineStart-EngineUp<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> In oVirt webadmin I can see the following message:<br>
> > > >>> > > > >>>>>>>> VM HostedEngine is down. Exit message: internal error<br>
> > > >>> Failed to<br>
> > > >>> > > > >>>>>>>> acquire<br>
> > > >>> > > > >>>>>>>> lock: error -243.<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> These messages are really annoying as oVirt isn't<br>
> > doing<br>
> > > >>> anything<br>
> > > >>> > > > >>>>>>>> with<br>
> > > >>> > > > >>>>>>>> hosted engine - I have an uptime of 9 days in my<br>
> > engine<br>
> > > >>> vm.<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> So my questions are now:<br>
> > > >>> > > > >>>>>>>> Is it intended to send out these messages and detect<br>
> > that<br>
> > > >>> ovirt<br>
> > > >>> > > > >>>>>>>> engine<br>
> > > >>> > > > >>>>>>>> is down (which is false anyway), but not to restart<br>
> > the<br>
> > > >>> vm?<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> How can I disable notifications? I'm planning to<br>
> > write a<br>
> > > >>> Nagios<br>
> > > >>> > > > >>>>>>>> plugin<br>
> > > >>> > > > >>>>>>>> which parses the output of hosted-engine --vm-status<br>
> > and<br>
> > > >>> only<br>
> > > >>> > > Nagios<br>
> > > >>> > > > >>>>>>>> should notify me, not the hosted-engine script.<br>
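<br>
A rough sketch of such a check (for the Python 2.6 hosts in this thread), assuming the "Engine status" lines keep the format shown earlier in this message and that a healthy engine reports 'health': 'good'; the command name comes from the thread and the exit codes are the usual Nagios conventions, this is not an official plugin:<br>
<br>
import re<br>
import subprocess<br>
import sys<br>
<br>
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3  # standard Nagios exit codes<br>
<br>
proc = subprocess.Popen(["hosted-engine", "--vm-status"], stdout=subprocess.PIPE)<br>
out = proc.communicate()[0]<br>
if proc.returncode != 0:<br>
    print("UNKNOWN: hosted-engine --vm-status failed")<br>
    sys.exit(UNKNOWN)<br>
<br>
# One "Engine status : ..." line per host block in the output shown above.<br>
statuses = re.findall(r"^Engine status\s*:\s*(.+)$", out, re.MULTILINE)<br>
if not statuses:<br>
    print("UNKNOWN: no 'Engine status' lines found")<br>
    sys.exit(UNKNOWN)<br>
<br>
# Assumed healthy form; adjust to the exact output of your version.<br>
if any("'health': 'good'" in s for s in statuses):<br>
    print("OK: hosted engine reported healthy on at least one host")<br>
    sys.exit(OK)<br>
<br>
print("CRITICAL: " + "; ".join(s.strip() for s in statuses))<br>
sys.exit(CRITICAL)<br>
<br>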
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> Is it possible or planned to make the whole ha feature<br>
> > > >>> > > optional? I<br>
> > > >>> > > > >>>>>>>> really really really hate cluster software as it<br>
> > causes<br>
> > > >>> more<br>
> > > >>> > > > >>>>>>>> trouble<br>
> > > >>> > > > >>>>>>>> than standalone machines and in my case the<br>
> > hosted-engine<br>
> > > >>> ha<br>
> > > >>> > > feature<br>
> > > >>> > > > >>>>>>>> really causes troubles (and I didn't have a hardware or<br>
> > > >>> network<br>
> > > >>> > > > >>>>>>>> outage<br>
> > > >>> > > > >>>>>>>> yet only issues with hosted-engine ha agent). I don't<br>
> > > >>> need any<br>
> > > >>> > > ha<br>
> > > >>> > > > >>>>>>>> feature for hosted engine. I just want to run engine<br>
> > > >>> > > virtualized on<br>
> > > >>> > > > >>>>>>>> oVirt and if engine vm fails (e.g. because of issues<br>
> > with<br>
> > > >>> a<br>
> > > >>> > > host)<br>
> > > >>> > > > >>>>>>>> I'll<br>
> > > >>> > > > >>>>>>>> restart it on another node.<br>
> > > >>> > > > >>>>>>><br>
> > > >>> > > > >>>>>>> Hi, you can:<br>
> > > >>> > > > >>>>>>> 1. edit<br>
> > > >>> /etc/ovirt-hosted-engine-ha/{agent,broker}-log.conf and<br>
> > > >>> > > tweak<br>
> > > >>> > > > >>>>>>> the logger as you like<br>
> > > >>> > > > >>>>>>> 2. or kill ovirt-ha-broker & ovirt-ha-agent services<br>
> > > >>> > > > >>>>>><br>
> > > >>> > > > >>>>>> Thanks for the information.<br>
> > > >>> > > > >>>>>> So engine is able to run when ovirt-ha-broker and<br>
> > > >>> ovirt-ha-agent<br>
> > > >>> > > aren't<br>
> > > >>> > > > >>>>>> running?<br>
> > > >>> > > > >>>>>><br>
> > > >>> > > > >>>>>><br>
> > > >>> > > > >>>>>> Regards,<br>
> > > >>> > > > >>>>>> René<br>
> > > >>> > > > >>>>>><br>
> > > >>> > > > >>>>>>><br>
> > > >>> > > > >>>>>>> --Jirka<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>> Thanks,<br>
> > > >>> > > > >>>>>>>> René<br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>>><br>
> > > >>> > > > >>>>>>><br>
> > > >>> > > > >>>>>><br>
> > > >>> > > > >>>><br>
> > > >>> > > > >><br>
> > > >>> > > ><br>
> > > >>> > ><br>
> > > >>> ><br>
> > > >>><br>
> > > >><br>
> > > >><br>
> > > ><br>
> > ><br>
> ><br>
><br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</div></div></blockquote></div></div></div><br></div>
</blockquote></div><br></div>