Users
Hi all,
Been playing with an oVirt test setup for the last couple of days.
Created some VMs and started to throw them around on the cluster, but now I'm
stuck. The VMs are running, but when I try to stop them, I get errors like
this:
https://plakbord.cloud.nl/p/zvAEVPFeBBJSBeGspKNxJsqF
When trying to migrate a VM, the node throws this error:
https://plakbord.cloud.nl/p/4Syi9A7tEd8L3A2pQg6boVB6
Any clue on what's happening?
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
Hi,
I am facing one strange issue in oVirt with GlusterFS: I want to
reactivate one of my host nodes, but it fails with the following error:
Gluster command [gluster peer status cpu04.zne01.hkg1.ovt.com] failed on
server cpu04.
Engine Logs :- http://ur1.ca/jczdp
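For reference, the check the engine runs here can be reproduced by hand on the
affected node, which usually shows where it breaks (a sketch; the service
command may differ per distribution):

# on cpu04: confirm the gluster daemon is running, then repeat the peer query
service glusterd status
gluster peer status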
The vdsm.log just after I turned the host where the HE VM runs to local maintenance.
In the log, there is a part like
---
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,988::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,989::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,990::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
JsonRpc (StompReactor)::DEBUG::2014-12-30
13:01:04,675::stompReactor::98::Broker.StompAdapter::(handle_frame)
Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-12-30
13:01:04,676::__init__::504::jsonrpc.JsonRpcServer::(serve_requests)
Waiting for request
Thread-1806995::DEBUG::2014-12-30
13:01:04,677::stompReactor::163::yajsonrpc.StompServer::(send) Sending
response
JsonRpc (StompReactor)::DEBUG::2014-12-30
13:01:04,678::stompReactor::98::Broker.StompAdapter::(handle_frame)
Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-12-30
13:01:04,679::__init__::504::jsonrpc.JsonRpcServer::(serve_requests)
Waiting for request
Thread-1806996::DEBUG::2014-12-30
13:01:04,681::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
---
Is there something wrong here?
Thanks,
Cong
> From: Artyom Lukianov <alukiano(a)redhat.com>
> Date: December 29, 2014 23:13:45 GMT-8
> To: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
> Cc: Simone Tiraboschi <stirabos(a)redhat.com>, "users(a)ovirt.org"
> <users(a)ovirt.org>
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> The HE VM is migrated only by ovirt-ha-agent and not by the engine, but the
> FatalError is more interesting; can you provide the vdsm.log for this one please?
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Artyom Lukianov" <alukiano(a)redhat.com>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, December 29, 2014 8:29:04 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> I disabled local maintenance mode for all hosts, and then set only the host
> where the HE VM runs to local maintenance mode. The logs are as follows.
> During the migration of the HE VM, a fatal error occurs. By the
> way, the HE VM also can not do live migration, while other VMs can
> live migrate.
>
> ---
> [root@compute2-3 ~]# hosted-engine --set-maintenance --mode=local
> You have new mail in /var/spool/mail/root
> [root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-29
> 13:16:12,435::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.92 (id: 3, score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:22,711::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:22,711::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.92 (id: 3, score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:32,978::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:32,978::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:43,272::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:43,272::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:53,316::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-29
> 13:16:53,562::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:53,562::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:03,600::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-29
> 13:17:03,611::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419877023.61 type=state_transition
> detail=EngineUp-LocalMaintenanceMigrateVm hostname='compute2-3'
> MainThread::INFO::2014-12-29
> 13:17:03,672::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (EngineUp-LocalMaintenanceMigrateVm) sent? sent
> MainThread::INFO::2014-12-29
> 13:17:03,911::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-29
> 13:17:03,912::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenanceMigrateVm (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:03,912::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:03,960::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419877023.96 type=state_transition
> detail=LocalMaintenanceMigrateVm-EngineMigratingAway
> hostname='compute2-3'
> MainThread::INFO::2014-12-29
> 13:17:03,980::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (LocalMaintenanceMigrateVm-EngineMigratingAway) sent? sent
> MainThread::INFO::2014-12-29
> 13:17:04,218::states::66::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_penalize_memory)
> Penalizing score by 400 due to low free memory
> MainThread::INFO::2014-12-29
> 13:17:04,218::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineMigratingAway (score: 2000)
> MainThread::INFO::2014-12-29
> 13:17:04,219::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::ERROR::2014-12-29
> 13:17:14,251::hosted_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitor_migration)
> Failed to migrate
> Traceback (most recent call last):
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 863, in _monitor_migration
> vm_id,
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/vds_client.py",
> line 85, in run_vds_client_cmd
> response['status']['message'])
> DetailedError: Error 12 from migrateStatus: Fatal error during migration
> MainThread::INFO::2014-12-29
> 13:17:14,262::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419877034.26 type=state_transition
> detail=EngineMigratingAway-ReinitializeFSM hostname='compute2-3'
> MainThread::INFO::2014-12-29
> 13:17:14,263::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (EngineMigratingAway-ReinitializeFSM) sent? ignored
> MainThread::INFO::2014-12-29
> 13:17:14,496::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state ReinitializeFSM (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:14,496::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:24,536::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-29
> 13:17:24,547::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419877044.55 type=state_transition
> detail=ReinitializeFSM-LocalMaintenance hostname='compute2-3'
> MainThread::INFO::2014-12-29
> 13:17:24,574::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (ReinitializeFSM-LocalMaintenance) sent? sent
> MainThread::INFO::2014-12-29
> 13:17:24,812::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:24,812::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:34,851::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-29
> 13:17:35,095::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:35,095::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:45,130::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-29
> 13:17:45,368::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:45,368::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> ^C
> [root@compute2-3 ~]#
>
>
> [root@compute2-3 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 0
> Local maintenance : True
> Host timestamp : 1014956
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1014956 (Mon Dec 29 13:20:19 2014)
> host-id=1
> score=0
> maintenance=True
> state=LocalMaintenance
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 866019
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=866019 (Mon Dec 29 10:19:45 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 860493
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=860493 (Mon Dec 29 10:20:35 2014)
> host-id=3
> score=2400
> maintenance=False
> state=EngineDown
> [root@compute2-3 ~]#
> ---
> Thanks,
> Cong
>
>
>
> On 2014/12/29, at 8:43, "Artyom Lukianov"
> <alukiano(a)redhat.com> wrote:
>
> I see that the HE VM runs on the host with IP 10.0.0.94, and the two other hosts are in
> "Local Maintenance" state, so the VM will not migrate to either of them. Can you
> try disabling local maintenance on all hosts in the HE environment, then
> enable "local maintenance" on the host where the HE VM runs, and also provide the
> output of hosted-engine --vm-status.
> Failover works in the following way:
> 1) if the host where the HE VM runs has a score lower by 800 than some other host in the
> HE environment, the HE VM will migrate to the host with the best score
> 2) if something happens to the VM (kernel panic, crash of a service, ...), the agent will
> restart the HE VM on another host in the HE environment with a positive score
> 3) if the host with the HE VM is put into local maintenance, the VM will migrate to another
> host with a positive score
> Thanks.
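> A minimal way to exercise case 3 on a test cluster is sketched below, using
> the same commands that appear elsewhere in this thread:
>
> # on every host: clear any stale local maintenance state
> hosted-engine --set-maintenance --mode=none
> # only on the host currently running the HE VM: trigger case 3
> hosted-engine --set-maintenance --mode=local
> # watch the agent walk EngineUp -> LocalMaintenanceMigrateVm -> EngineMigratingAway
> tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> hosted-engine --vm-status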
>
> ----- Original Message -----
> From: "Cong Yue"
> <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>>
> To: "Artyom Lukianov" <alukiano(a)redhat.com<mailto:alukiano@redhat.com>>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com<mailto:stirabos@redhat.com>>,
> users(a)ovirt.org<mailto:users@ovirt.org>
> Sent: Monday, December 29, 2014 6:30:42 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> Thanks and the --vm-status log is as follows:
> [root@compute2-2 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 1008087
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1008087 (Mon Dec 29 11:25:51 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 859142
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=859142 (Mon Dec 29 08:25:08 2014)
> host-id=2
> score=0
> maintenance=True
> state=LocalMaintenance
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 853615
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=853615 (Mon Dec 29 08:25:57 2014)
> host-id=3
> score=0
> maintenance=True
> state=LocalMaintenance
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]#
>
> Could you please explain how VM failover works inside ovirt? Is there any
> other debug option I can enable to check the problem?
>
> Thanks,
> Cong
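> One debug knob worth knowing here is the agent's own logging config; a
> sketch, assuming a stock ovirt-hosted-engine-ha install whose
> /etc/ovirt-hosted-engine-ha/agent-log.conf sets level=INFO:
>
> # raise the ha-agent verbosity, then restart it and re-read agent.log
> sed -i 's/level=INFO/level=DEBUG/' /etc/ovirt-hosted-engine-ha/agent-log.conf
> service ovirt-ha-agent restart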
>
>
> On 2014/12/29, at 1:39, "Artyom Lukianov"
> <alukiano(a)redhat.com>
> wrote:
>
> Can you also provide the output of hosted-engine --vm-status please? It was
> useful the previous time, because I do not see anything unusual here.
> Thanks
>
> ----- Original Message -----
> From: "Cong Yue"
> <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>>
> To: "Artyom Lukianov"
> <alukiano(a)redhat.com<mailto:alukiano@redhat.com><mailto:alukiano@redhat.com>>
> Cc: "Simone Tiraboschi"
> <stirabos(a)redhat.com<mailto:stirabos@redhat.com><mailto:stirabos@redhat.com>>,
> users(a)ovirt.org<mailto:users@ovirt.org><mailto:users@ovirt.org>
> Sent: Monday, December 29, 2014 7:15:24 AM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> I also changed the maintenance mode to local on another host, but the VM
> on this host can not be migrated either. The logs are as follows.
>
> [root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
> [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-28
> 21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm is running on host 10.0.0.94 (id 1)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:55,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 21:09:55,701::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419829795.7 type=state_transition
> detail=EngineDown-LocalMaintenance hostname='compute2-2'
> MainThread::INFO::2014-12-28
> 21:09:55,761::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (EngineDown-LocalMaintenance) sent? sent
> MainThread::INFO::2014-12-28
> 21:09:55,990::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-28
> 21:09:55,990::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 21:09:55,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> ^C
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]# ps -ef | grep qemu
> root 18420 2777 0 21:10 pts/0
> 00:00:00 grep --color=auto qemu
> qemu 29809 1 0 Dec19 ? 01:17:20 /usr/libexec/qemu-kvm
> -name testvm2-2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem
> -m 500 -realtime mlock=off -smp
> 1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> c31e97d0-135e-42da-9954-162b5228dce3 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0059-3610-8033-B4C04F395931,uuid=c31e97d0-135e-42da-9954-162b5228dce3
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2-2.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2014-12-19T20:17:17,driftfix=slew
> -no-kvm-pit-reinjection
> -no-hpet -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive
> file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/5cbeb8c9-4f04-48d0-a5eb-78c49187c550/a0570e8c-9867-4ec4-818f-11e102fc4f9b,if=none,id=drive-virtio-disk0,format=qcow2,serial=5cbeb8c9-4f04-48d0-a5eb-78c49187c550,cache=none,werror=stop,rerror=stop,aio=threads
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:00,bus=pci.0,addr=0x3
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice
> tls-port=5901,addr=10.0.0.93,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
> qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
> [root@compute2-2 ~]#
>
> Thanks,
> Cong
>
>
> On 2014/12/28, at 20:53, "Yue, Cong"
> <Cong_Yue(a)alliedtelesis.com>>
> wrote:
>
> I checked it again and confirmed there is one guest VM running on top
> of this host. The log is as follows:
>
> [root@compute2-1 vdsm]# ps -ef | grep qemu
> qemu 2983 846 0 Dec19 ? 00:00:00
> [supervdsmServer] <defunct>
> root 5489 3053 0 20:49 pts/0
> 00:00:00 grep --color=auto qemu
> qemu 26128 1 0 Dec19 ? 01:09:19 /usr/libexec/qemu-kvm
> -name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
> 500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
> -uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2014-12-19T20:18:01,driftfix=slew
> -no-kvm-pit-reinjection
> -no-hpet -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive
> file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice
> tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
> qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
> [root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-28
> 20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-28
> 20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
> Thanks,
> Cong
>
>
> On 2014/12/28, at 3:46, "Artyom Lukianov"
> <alukiano(a)redhat.com>>
> wrote:
>
> I see that you set local maintenance on host3, which does not have the engine VM on
> it, so there is nothing to migrate from this host.
> If you set local maintenance on host1, the VM must migrate to another host with
> a positive score.
> Thanks
>
> ----- Original Message -----
> From: "Cong Yue"
> <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>>
> To: "Simone Tiraboschi"
> <stirabos(a)redhat.com<mailto:stirabos@redhat.com><mailto:stirabos@redhat.com><mailto:stirabos@redhat.com>>
> Cc:
> users(a)ovirt.org<mailto:users@ovirt.org><mailto:users@ovirt.org><mailto:users@ovirt.org>
> Sent: Saturday, December 27, 2014 6:58:32 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> Hi
>
> I tried "hosted-engine --set-maintenance --mode=local" on
> compute2-1, which is host 3 in my cluster. The log shows that
> maintenance mode is detected, but migration does not happen.
>
> The logs are as follows. Is there any other config I need to check?
>
> [root@compute2-1 vdsm]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 836296
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=836296 (Sat Dec 27 11:42:39 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 687358
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=687358 (Sat Dec 27 08:42:04 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 681827
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=681827 (Sat Dec 27 08:42:40 2014)
> host-id=3
> score=0
> maintenance=True
> state=LocalMaintenance
> [root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
>
>
> [root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-27
> 11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
>
>
> [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm is running on host 10.0.0.94 (id 1)
> MainThread::INFO::2014-12-27
> 08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Global metadata: {'maintenance': False}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.94 (id 1): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987
> (Sat Dec 27 11:37:30
> 2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
> 'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status':
> {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400,
> 'maintenance': False, 'host-ts': 835987}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.92 (id 3): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528
> (Sat Dec 27 08:37:41
> 2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n',
> 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True,
> 'host-ts': 681528}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Local (id 2): {'engine-health': {'reason': 'vm not running on this
> host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':
> True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215,
> 'gateway': True}
> MainThread::INFO::2014-12-27
> 08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
> Thanks,
> Cong
>
> On 2014/12/22, at 5:29, "Simone Tiraboschi"
> <stirabos(a)redhat.com>>
> wrote:
>
>
>
> ----- Original Message -----
> From: "Cong Yue"
> <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>>
> To: "Simone Tiraboschi"
> <stirabos(a)redhat.com<mailto:stirabos@redhat.com><mailto:stirabos@redhat.com><mailto:stirabos@redhat.com>>
> Cc:
> users(a)ovirt.org<mailto:users@ovirt.org><mailto:users@ovirt.org><mailto:users@ovirt.org>
> Sent: Friday, December 19, 2014 7:22:10 PM
> Subject: RE: [ovirt-users] VM failover with ovirt3.5
>
> Thanks for the information. This is the log for my three oVirt nodes.
> The output of hosted-engine --vm-status shows the engine state for
> my 2nd and 3rd oVirt nodes is DOWN.
> Is this the reason why VM failover does not work in my environment?
>
> No, they look OK: you can run the engine VM on a single host at a time.
>
> How can I make
> the engine also work for my 2nd and 3rd oVirt nodes?
>
> If you put host 1 in local maintenance mode ( hosted-engine
> --set-maintenance --mode=local ) the VM should migrate to host 2; if you
> reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put
> host 2 in local maintenance mode, the VM should migrate again.
>
> Can you please try that and post the logs if something is going bad?
>
>
> --
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 150475
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=150475 (Fri Dec 19 13:12:18 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 1572
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1572 (Fri Dec 19 10:12:18 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : False
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : unknown stale-data
> Score : 2400
> Local maintenance : False
> Host timestamp : 987
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=987 (Fri Dec 19 10:09:58 2014)
> host-id=3
> score=2400
> maintenance=False
> state=EngineDown
>
> --
> And the /var/log/ovirt-hosted-engine-ha/agent.log for three ovirt nodes are
> as follows:
> --
> 10.0.0.94(hosted-engine-1)
> ---
> MainThread::INFO::2014-12-19
> 13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-19
> 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Global metadata: {'maintenance': False}
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.93 (id 2): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448
> (Fri Dec 19 10:10:14
> 2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
> 'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
> 'host-ts': 1448}
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.92 (id 3): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987
> (Fri Dec 19 10:09:58
> 2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
> 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
> 'host-ts': 987}
> MainThread::INFO::2014-12-19
> 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up',
> 'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance':
> False, 'cpu-load': 0.0269, 'gateway': True}
> MainThread::INFO::2014-12-19
> 13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-19
> 13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> ----
>
> 10.0.0.93 (hosted-engine-2)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
>
> 10.0.0.92(hosted-engine-3)
> same as 10.0.0.93
> --
>
> -----Original Message-----
> From: Simone Tiraboschi [mailto:stirabos@redhat.com]
> Sent: Friday, December 19, 2014 12:28 AM
> To: Yue, Cong
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
>
>
> ----- Original Message -----
> From: "Cong Yue"
> <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>>
> To:
> users(a)ovirt.org<mailto:users@ovirt.org><mailto:users@ovirt.org><mailto:users@ovirt.org>
> Sent: Friday, December 19, 2014 2:14:33 AM
> Subject: [ovirt-users] VM failover with ovirt3.5
>
>
>
> Hi
>
>
>
> In my environment, I have 3 oVirt nodes in one cluster, and on top of
> host-1 there is one VM hosting the oVirt engine.
>
> I also have one external storage for the cluster to use as the data domain
> for engine and data.
>
> I confirmed live migration works well in my environment.
>
> But VM failover seems very buggy if I force one oVirt node to shut down.
> Sometimes the VM on the node which is shut down can migrate to another
> host, but it takes more than several minutes.
>
> Sometimes it can not migrate at all. Sometimes the VM only starts to move
> once the host is back.
>
> Can you please check or share the logs under
> /var/log/ovirt-hosted-engine-ha/
> ?
>
> Is there some documentation explaining how VM failover works? And
> are there any bugs reported related to this?
>
> http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
>
> Thanks in advance,
>
> Cong
>
>
>
>
> This e-mail message is for the sole use of the intended recipient(s)
> and may contain confidential and privileged information. Any
> unauthorized review, use, disclosure or distribution is prohibited. If
> you are not the intended recipient, please contact the sender by reply
> e-mail and destroy all copies of the original message. If you are the
> intended recipient, please be advised that the content of this message
> is subject to access, review and disclosure by the sender's e-mail System
> Administrator.
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
Dear All,
after a failing hosted-engine --deploy, I am trying to recover the system based on the following description:
http://lists.ovirt.org/pipermail/users/2014-May/024423.html
Whatever I do, however, I receive the following message during the next hosted-engine --deploy:
[ ERROR ] Failed to execute stage 'Environment setup': Failed to reconfigure libvirt for VDSM
Is there a way to initiate such a reconfiguration without completely reinstalling the server from scratch?
Thank you very much for any efforts,
Michael
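For reference, VDSM ships a helper that can redo its libvirt configuration in place, which may avoid a full reinstall; a minimal sketch, assuming the vdsm-tool shipped with oVirt 3.5-era VDSM:
# Force VDSM to rewrite the configuration files it manages (libvirt included)
vdsm-tool configure --force
# then restart the daemons (use "service ... restart" on EL6)
systemctl restart libvirtd vdsmd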
2
1
We have a new community ad set to go for the next StackOverflow campaign for the first half of the year. Allon Mureinik has posted it at:
http://meta.stackoverflow.com/a/283016/2422776
Now we just need some upvotes there to have the ad approved. The current threshold is +6. If you are a member of the StackOverflow network, we could use the support!
Thanks!
BKP
--
Brian Proffitt
Community Liaison
oVirt
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
2
1
Hello everyone!
We have been working on our test oVirt cluster again today, for the
first time in a few weeks, and all of a sudden a new problem has cropped
up. VMs that I created weeks ago and had working properly no longer
start. When we try to start one of them, we get this error in
the engine console:
VM CentOS1 is down with error. Exit message: Bad volume specification
{'index': 0, 'iface': 'virtio', 'type': 'disk', 'format': 'raw',
'bootOrder': '1', 'volumeID': 'a737621e-6e66-4cd9-9014-67f7aaa184fb',
'apparentsize': '53687091200', 'imageID':
'702440a9-cd53-4300-8369-28123e8a095e', 'specParams': {}, 'readonly':
'false', 'domainID': 'fa2f828c-f98a-4a17-99fb-1ec1f46d018c', 'reqsize':
'0', 'deviceId': '702440a9-cd53-4300-8369-28123e8a095e', 'truesize':
'53687091200', 'poolID': 'a0781e2b-6242-4043-86c2-cd6694688ed2', 'device':
'disk', 'shared': 'false', 'propagateErrors': 'off', 'optional': 'false'}.
Looking at the VDSM log files, I think I've found what's actually
triggering this, but I honestly do not know how to decipher it - here's
the message:
Thread-418::ERROR::2015-01-09
15:59:57,874::task::863::Storage.TaskManager.Task::(_setError)
Task=`11a740b7-4391-47ab-8575-919bd1e0c3fb`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 870, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 3242, in prepareImage
leafInfo = dom.produceVolume(imgUUID, leafUUID).getVmVolumeInfo()
File "/usr/share/vdsm/storage/glusterVolume.py", line 35, in
getVmVolumeInfo
volTrans = VOLUME_TRANS_MAP[volInfo[volname]['transportType'][0]]
KeyError: u'_gf-os'
This is oVirt 3.5, with a 2-node gluster as the storage domain (no oVirt
stuff running there), and 5 virtualization nodes, all machines running
CentOS 6.6 installs. We also have the patched RPMs that *should* enable
libgfapi access to gluster, but I can't confirm those are working
properly. The gluster filesystem is mounted on the virtualization node:
gf-os01-ib:/gf-os on /rhev/data-center/mnt/glusterSD/gf-os01-ib:_gf-os type
fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
Anyone got any ideas? More logs available upon request.
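The failing line in glusterVolume.py looks up the volume's transportType in the dict gluster returns, keyed by volume name, and the KeyError on u'_gf-os' suggests vdsm derived that key from the mount path (gf-os01-ib:_gf-os) rather than the real volume name. A quick way to compare, a sketch assuming the volume is actually named gf-os:
# Show what gluster itself reports for the volume (name, transport type)
gluster volume info gf-os
# and the XML form that management tools parse
gluster volume info gf-os --xml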
1
0
Hello!
Recently I have upgraded my installation to 3.5. Everything works fine
except for two things:
1. I cannot use the REST API any more - "Internal server error 500" with a
java exception. Maybe I'm missing something?
2. Cannot import a VM from the Export domain to Local Storage - "VM already
exists" error, but there are no VMs or disks on the local storage.
[Attached screenshot: dfdiaadb.png]
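For the REST API 500, the java exception usually lands in the engine's server log, and hitting the API root directly shows the raw response; a sketch, assuming the default admin@internal user and a hypothetical engine hostname:
# -k skips certificate checks, fine for a quick test
curl -k -u 'admin@internal:PASSWORD' https://engine.example.com/ovirt-engine/api
# the matching stack trace should be in /var/log/ovirt-engine/server.log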
3
4
I have been trying to use libgfapi glusterfs support in oVirt but can't
get it to work. After talks on IRC it seems I should apply a patch
(http://gerrit.ovirt.org/33768) to enable libgfapi, but applying it
hasn't worked either. Systems used:
- hosts Centos7 or Fedora20 (so upto date qemu/libvirt/oVirt(3.5))
- glusterfs-3.6.1
- vdsm-4.16.0-524.gitbc618a4.el7.x86_64 (snapshot master 14-nov)
- vdsm-4.16.7-1.gitdb83943.el7.x86_64 (official ovirt-3.5 vdsm, which
seems newer than the master snapshot?)
Just adding the patch to vdsm-4.16.7-1.gitdb83943.el7.x86_64 doesn't
work; vdsm no longer starts due to an error in virt/vm.py.
Q1: What is the exact status of libgfapi support in oVirt?
Q2: How do I test that patch?
Joop
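On Q2, gerrit changes can be pulled straight into a vdsm checkout and rebuilt; a rough sketch, assuming patchset 1 of change 33768 (the patchset number may differ - check the change page):
git clone git://gerrit.ovirt.org/vdsm && cd vdsm
# changes live under refs/changes/<last two digits>/<change number>/<patchset>
git fetch git://gerrit.ovirt.org/vdsm refs/changes/68/33768/1
git cherry-pick FETCH_HEAD
# rebuild packages from the patched tree
./autogen.sh --system && make rpm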
7
18
Hey all,
Is there a way to override the host parameter that noVNC uses in its URL?
It now tries to use its internal hostname (private LAN), while we would
also like to connect over the internet:
https://ovirt.domain.com/ovirt-engine/services/novnc-main.html?host=engine.…
--
Met vriendelijke groeten / With kind regards,
Johan Kooijman
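The hostname in that URL comes from the engine's websocket proxy setting, so it can be pointed at the public name; a sketch, assuming the stock engine-config tool (6100 is the default proxy port):
engine-config -s WebSocketProxy=ovirt.domain.com:6100
# restart the engine for the change to take effect
systemctl restart ovirt-engine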
3
2
Hi,
are you full of energy after the winter holidays and do you want to get involved in the oVirt project?
Here are some bugs you can hopefully fix in less than one day, or you can just try to reproduce them, providing info:
Bug 1080823 - [RFE] make override of iptables configurable when using hosted-engine
Bug 1065350 - hosted-engine should prompt a question at the user when the host was already a host in the engine
Bug 1059952 - hosted-engine --deploy (additional host) will fail if the engine is not using the default self-signed CA
Bug 1073421 - [RFE] allow additional parameter for engine-backup to omit audit_log data
Bug 1083104 - engine-setup --offline does not update versionlock
Do you want something easier?
Bug ID Status Summary
1174285 NEW [de-DE] "Live Snapshot Support" reads "Live Snapsnot Support"
734120 NEW [RFE] VDSM: use virt-sparsify/zerofree to reduce image size
1115059 NEW Incomplete error message when adding VNIC profile to running VM
1156060 NEW [text] engine admin password prompt consistency
1143817 NEW [TEXT ONLY] - Hosted Engine - Instructions for FQDN are not clear enough
772931 NEW [RFE] Reports should include the name of the oVirt engine
You don't have programming skills but you want to contribute?
Here are some bugs you can take care of, also without writing a line of code:
Bug ID Status Summary
1099998 NEW Hosted Engine documentation has several errors
1099995 NEW Migrate to Hosted Engine How-To does not state all pre-reqs
1159784 NEW [RFE] Document when and where new features are available when upgrading cluster / datacenters
1074545 NEW Error in API documentation: Create API object in python sdk
1120585 NEW update image uploader documentation
1120586 NEW update iso uploader documentation
1120588 NEW update log collector documentation
1074301 NEW [RFE] ovirt-shell has no man page
Do you prefer to test things? We have some test cases[5] you can try using nightly snapshots[6]
Do you want to contribute test cases? Most of the features[7] included in oVirt are missing a test case, you're welcome to contribute one!
Is this the first time you have tried to contribute to the oVirt project?
You can start from here [1][2]!
Don't know gerrit very well? You can find some more docs here [3].
Any other question about development? Feel free to ask on devel(a)ovirt.org or on irc channel[4].
[1] http://www.ovirt.org/Develop
[2] http://www.ovirt.org/Working_with_oVirt_Gerrit
[3] https://gerrit-review.googlesource.com/Documentation
[4] http://www.ovirt.org/Community
[5] http://www.ovirt.org/Category:TestCase
[6] http://www.ovirt.org/Install_nightly_snapshot
[7] http://www.ovirt.org/Category:Feature
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
1
0
Hi
I would like to see if anyone has a good suggestion.
I have two physical hosts with 1GB connections to switched networks. The
hosts also have a 10GB interface connected directly using Twinax cable,
like a copper crossover cable. The idea was to use the 10GB link as a
"private network" for GlusterFS till the day we want to grow out of this
2-node setup.
GlusterFS was set up on the 10GB ports using non-routable IPs and hostnames
in /etc/hosts, for example, gfs1 192.168.1.1 and gfs2 192.168.1.2.
I'm following the example from community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5/ .
Currently I'm only using a Gluster volume on node1, but a `gluster peer
probe` test worked fine with node2 through the 10GB connection.
The oVirt engine was set up on physical host1 with hosted engine. Now, when
I try to create a new Gluster storage domain, I can only see the host
"node1" available.
Is there any way I can set up oVirt on node1 and node2, while using "gfs1"
and "gfs2" for GlusterFS? Or some way to take advantage of the 10GB
connection?
Thanks
W
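One approach that keeps the gluster traffic on the direct 10GB link is to make the gfs* names resolvable on every hypervisor and then use them in the storage domain path; a sketch using the addresses from the mail (volume name 'data' is hypothetical):
# /etc/hosts on both nodes
192.168.1.1  gfs1
192.168.1.2  gfs2
# storage domain path entered in the UI, so the mount goes over 10GB:
#   gfs1:/data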
3
6
Hello,
I'm trying to configure LDAP authentication with oVirt 3.5 as described in http://www.ovirt.org/Features/AAA. I chose the simple bind transport example, but the given examples are missing an explicit specification of the base DN. Could you please advise me on how this can be done?
Kind regards
Jannick
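As a starting point, the base DN a directory server advertises can be read from its root DSE with plain ldapsearch; a sketch, assuming a hypothetical server ldap.example.com:
# anonymous query of the root DSE for the advertised naming contexts
ldapsearch -x -H ldap://ldap.example.com -b '' -s base namingContexts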
1
0
I believe that I'm stuck trying to follow the deployment via the
popular guide at:
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part…
I'm trying to get hosted-engine deployed on my second host and seem to keep
running into errors with '--deploy' on the second node.
---
--== NETWORK CONFIGURATION ==--
Please indicate a pingable gateway IP address [x.x.x.x]:
The following CPU types are supported by this host:
- model_Westmere: Intel Westmere Family
- model_Nehalem: Intel Nehalem Family
- model_Penryn: Intel Penryn Family
- model_Conroe: Intel Conroe Family
[ ERROR ] Failed to execute stage 'Environment customization': Invalid CPU
type specified: None
---
It seems like the answer file from the first node is either not being
copied or is incorrect. In any case, I was wondering if there is a way
to verify this, as well as whether there is a method for regenerating
the answer file on the first node should it be missing (which I think
it might be).
I apologize if this is pretty obvious stuff at this point as I feel like
I'm missing something. Any help would be greatly appreciated.
Thanks,
Chris
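On a working first host the generated answers are kept on disk, so they can be checked and, if needed, fed to the second deploy explicitly; a sketch, assuming the default oVirt 3.5 paths and a hypothetical copy location (the --config-append option is otopi-based and may vary by version):
# on host 1: inspect the answer file; it should name a CPU type
grep -i cpu /etc/ovirt-hosted-engine/answers.conf
scp /etc/ovirt-hosted-engine/answers.conf host2:/root/answers.conf
# on host 2: point the setup at the copied answers
hosted-engine --deploy --config-append=/root/answers.conf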
2
3
Hi, we have recently updated our production environment to oVirt 3.4.4.
I have created a positive enforcing VM Affinity Group with 2 VMs in one of
our clusters, but they don't seem to be moving (they are currently on
different hosts). Is there something else I need to activate?
Thanks
*Gary Lloyd*
----------------------------------
IT Services
Keele University
-----------------------------------
2
1
Heya--
I'm using oVirt 3.5 and trying to add a POSIX-compliant FS to a node in
my cluster.
The storage I'm trying to add is contained within LVM. Below is a link
to my log files on the node where I'm trying to attach the storage.
http://pastebin.com/fzN9ktAX
I've read the oVirt manual for adding POSIX-compliant storage and
believe I'm doing everything correctly.
Any help getting this storage added would be great. Thanks, and if I forgot
to include any info, please ask.
--julian
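One thing worth ruling out, assuming an LVM-backed filesystem (device path and fs type below are placeholders): vdsm performs an ordinary mount for POSIX domains, so doing the same mount by hand and checking vdsm ownership often surfaces the real error:

mkdir -p /mnt/posixtest
mount -t ext4 -o rw /dev/myvg/mylv /mnt/posixtest
# vdsm runs as uid/gid 36 (vdsm:kvm) and must be able to write here
chown 36:36 /mnt/posixtest
touch /mnt/posixtest/ok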
5
12
=========================
#ovirt: oVirt Weekly Sync
=========================
Meeting started by bkp at 15:03:23 UTC.
Minutes: http://ovirt.org/meetings/ovirt/2015/ovirt.2015-01-07-15.03.html
Minutes (text): http://ovirt.org/meetings/ovirt/2015/ovirt.2015-01-07-15.03.txt
Log: http://ovirt.org/meetings/ovirt/2015/ovirt.2015-01-07-15.03.log.html
Meeting summary
---------------
* Agenda and Roll Call (bkp, 15:03:35)
* infra update (bkp, 15:03:35)
* 3.5.z updates (bkp, 15:03:35)
* 3.6.0 status (bkp, 15:03:35)
* conferences and workshops (bkp, 15:03:35)
* other topics (bkp, 15:03:38)
* infra update (bkp, 15:05:39)
* infra update The ILO issue was fixed on the PHX lab, and we're
waiting now for dcaro to return to start phase II (bkp, 15:13:45)
* infra update Phase II includes assigning more IPs to the lab,
migrating linode services, etc. Expected start: next week. (bkp,
15:13:48)
* infra update A new unstable view was added to show production jobs
to better focus on fixing critical jobs first, and leave jobs that
are work-in-progress for another view (bkp, 15:13:51)
* 3.5.z updates (bkp, 15:14:29)
* 3.5.z updates 3.5.1 RC is postponed due to two known blockers:
1160846 (which SLA is still working on) and 1177290 (ovirt-engine
not installable on CentOS 6: broken dependency novnc) (bkp,
15:34:35)
* 3.5.z updates Status of 1177290 fix: Not sure of status, but may be
able to have a workaround by the end of this week, according to
sbonazzo (bkp, 15:34:39)
* 3.5.z updates According to didi, there is already a "workaround"
mentioned in the 1177290 - we took downstream rhel sources and built
on upstream jenkins, eedri signed them and they are available in the
3.5 repo. So not sure it's a blocker for 3.5.1, although we do need
a longer-term solution (bkp, 15:34:42)
* 3.5.z updates 1177290 dropped as a blocker, retargeted to 3.6 for
proper handling (bkp, 15:34:47)
* 3.5.z updates Status of 1160846 fix: Hopefully Monday, according to
gchaplik (bkp, 15:34:50)
* 3.5.z updates danken reported another potential blocker with vdsm.
No bug assigned/tracked yet. sbonazzo asked to assist. (bkp,
15:34:53)
* 3.5.z updates There are still 57 bugs targeted to 3.5.1. Excluding
node and documentation bugs we still have 37 bugs targeted to 3.5.1
(bkp, 15:34:56)
* 3.5.z updates 3.5.1 RC tentatively scheduled for Jan. 14, 2015.
(bkp, 15:34:59)
* 3.6 status (bkp, 15:35:07)
* ACTION: 3.6.0 status Features proposed for 3.6.0 must now be
collected in the 3.6 Google doc (http://goo.gl/9X3G49) and reviewed
by maintainers (bkp, 15:47:28)
* 3.6.0 status Finished the review process, the remaining key
milestones for this release will be scheduled. (bkp, 15:47:31)
* 3.6.0 status Will now need to think about what to do with novnc in
this release. (bkp, 15:47:34)
* 3.6.0 status Jenkins build for 3.6 currently failing, but once back
to normal, 3.6 should be able to run on F21 (thanks emesika) (bkp,
15:47:37)
* 3.6.0 status There are 466 bugs targeted to 3.6.0. Excluding node
and documentation bugs we have 440 bugs targeted to 3.6.0. (bkp,
15:47:40)
* 3.6.0 status Host Network QoS is all merged, at long last. (bkp,
15:47:43)
* 3.6.0 status emesika's patch to introduce 3.6 clusters into oVirt
was merged today, now this feature (and any other that's already
merged) will be available for users to use. (bkp, 15:47:45)
* 3.6.0 status Networking is also making decent progress on a bunch of
other features, updating on the spreadsheet. More should be merged
in the coming weeks. (bkp, 15:47:48)
* 3.6.0 status UX has completed a feature for 3.6 that a lot of people
have apparently wanted returned since 3.0. When switching main tabs
the search now remembers the query you entered. (bkp, 15:47:51)
* conferences and workshops (bkp, 15:47:58)
* conferences and workshops FOSDEM is fast approaching. If you plan to
attend this event on Jan. 31-Feb. 1, stop by the oVirt booth as well
as the Virt and IaaS devrooms to hear more about oVirt! (bkp,
15:48:22)
* conferences and workshops Stay tuned for more info on a FOSDEM
social gathering for oVirt during the event, too! (bkp, 15:48:26)
* conferences and workshops We are currently planning a one-day oVirt
Workshop to coincide with FOSSAsia in Singapore, Feb. 13-15.
Asia-Pac, Australia, and West Asia users are invited to attend.
Details and registration to follow soon! (bkp, 15:48:29)
* conferences and workshops jbrooks will be speaking on the new Smart
VM Scheduler at SCALE 13X, as well as bkp, who will be discussing
data centers vs. cloud VM management. A shared booth will be present
as well. (bkp, 15:48:33)
* conferences and workshops bkp will be speaking at the Linux Collab
Summit on VM and container management (bkp, 15:48:37)
* conferences and workshops amureini will be giving a talk at
DevConfCZ in Feb, after FOSDEM (bkp, 15:51:27)
* other topics (bkp, 15:51:53)
* other topics bkp has completed a case study on CloudSpin, and will
be posting it today on ovirt.org. (bkp, 15:52:25)
* other topics One of the 2015 goals is updating oVirt documentation.
To that end, we will compile a list of known use cases/tasks that
users and admins do with oVirt in the real world. Once this task
list is compiled, we can start building an action-oriented set of
user documentation. bkp will be setting this up and sending out a
notice soon. (bkp, 15:52:36)
* other topics The Dec. 2014 newsletter was just posted (which is why
this meeting was late), and can be found at
http://lists.ovirt.org/pipermail/users/2015-January/030436.html
(bkp, 15:52:50)
Meeting ended at 15:55:53 UTC.
Action Items
------------
* 3.6.0 status Features proposed for 3.6.0 must now be collected in the
3.6 Google doc (http://goo.gl/9X3G49) and reviewed by maintainers
Action Items, by person
-----------------------
* **UNASSIGNED**
* 3.6.0 status Features proposed for 3.6.0 must now be collected in the
3.6 Google doc (http://goo.gl/9X3G49) and reviewed by maintainers
--
Brian Proffitt
Community Liaison
oVirt
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
1
0
The close of the old year was a bit of a downbeat, but the news and views about oVirt were still happening!
News and Community
oVirt is now officially part of the CentOS Virt SIG! (http://lists.centos.org/pipermail/centos-virt/2014-December/004187.html)
Meanwhile, CentOS 7 gets reviewed using oVirt (http://www.networkworld.com/article/2861458/linux/centos-7-the-perfect-gift…)
CloudSpin is a great new *free* (as in everything) machine-hosting service that uses oVirt. Learn more at https://cloudspin.me/
Better performance than VMware? Here's how to set up oVirt and Gluster to achieve better benchmarks [Spanish] http://blog.sotoca.es/2014/08/agilizando-la-infraestructura-iii-ovirt.html
Find out how the University of Sevilla used oVirt and tools from UDS Enterprise to build a better virtual desktop solution for their campus: http://www.slideshare.net/UDSenterprise/virtual-desktops-in-educational-env… (http://www.slideshare.net/UDSenterpriseSpain/escritorios-virtuales-en-entor… [Spanish])
Allon Mureinik opines on The Helpful Stranger and the Meaning of Open Source (http://opensource.com/life/14/12/the-meaning-of-open-source)
Open Source Is Just Another Way of Doing Good Business, and everyone knows it: http://community.redhat.com/blog/2014/12/open-source-is-just-another-way-of…
ManageIQ partner VMTurbo asks: Does a Monitoring Tool Provide Automated Decisions? Especially using technologies like oVirt and RHEV? http://vmturbo.com/blog/monitoring-tool-automated-decisions/
Deploying and Managing Gluster using oVirt Webadmin Portal http://blogs-ramesh.blogspot.in/2014/12/deploying-gluster-using-ovirt.html
oVirt's China community just posted the oVirt 3.5 release notes (http://ovirt-china.org/mediawiki/index.php/OVirt_3.5_Release_Notes)
Discover more about open source virtualization options (http://computerworld.cz/software/virtualizace-ve-viru-open-source-51667) [Czech]
Software
Check out rbovirt, a Ruby client for the oVirt REST API (http://rubygems.org/gems/rbovirt)
Learn more about the new UI plugin for oVirt, iso-uploader-plugin: https://github.com/ovirt-china/iso-uploader-plugin/wiki/Specifications
Videos
oVirt is featured software on the popular Hak5 video podcast http://youtu.be/6w8F5k41_9E
Learn more about another successful VMware to oVirt migration: http://tv.rediris.es/es/jt2014/400.html [Spanish]
Introduction to oVirt at CISL: http://youtu.be/EuxRrlsGO3k [Portuguese]
Manage oVirt Open Source Project Infrastructure with Jenkins: http://youtu.be/WIRxw3noMmA [Hebrew]
Take a video tour of oVirt http://youtu.be/cgH20bYt2z8 [Russian]
--
Brian Proffitt
Community Liaison
oVirt
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
1
0
Hi All!
Just tried to install the hosted engine on a fresh CentOS 6.6: besides
setting up a Gluster cluster, I just added the repo and installed the
hosted-engine-setup package. Otherwise it's a very minimal
installation. hosted-engine --deploy failed immediately. The issue seems to
be the same as described here
http://lists.ovirt.org/pipermail/users/2014-October/028461.html but that
conversation didn't continue after the reporting user was asked for
additional details.
Failed installation:
[root@vhost1 ~]# hosted-engine --deploy
[ INFO ] Stage: Initializing
Continuing will configure this host for serving as hypervisor and
create a VM where you have to install oVirt Engine afterwards.
Are you sure you want to continue? (Yes, No)[Yes]:
[ INFO ] Generating a temporary VNC password.
[ INFO ] Stage: Environment setup
Configuration files: []
Log file:
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20141223224236-y9ttk4.log
Version: otopi-1.3.0 (otopi-1.3.0-1.el6)
[ INFO ] Hardware supports virtualization
[ INFO ] Stage: Environment packages setup
[ INFO ] Stage: Programs detection
[ INFO ] Stage: Environment setup
[ ERROR ] Failed to execute stage 'Environment setup': Command
'/sbin/service' failed to execute
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Answer file '/etc/ovirt-hosted-engine/answers.conf' has been
updated
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
Packages:
[root@vhost1 ~]# rpm -qa|grep hosted-engine
ovirt-hosted-engine-setup-1.2.1-1.el6.noarch
ovirt-hosted-engine-ha-1.2.4-1.el6.noarch
[root@vhost1 ~]# rpm -qa|grep vdsm
vdsm-yajsonrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-python-zombiereaper-4.16.7-1.gitdb83943.el6.noarch
vdsm-python-4.16.7-1.gitdb83943.el6.noarch
vdsm-xmlrpc-4.16.7-1.gitdb83943.el6.noarch
vdsm-4.16.7-1.gitdb83943.el6.x86_64
vdsm-cli-4.16.7-1.gitdb83943.el6.noarch
vdsm-jsonrpc-4.16.7-1.gitdb83943.el6.noarch
So here's the output that was requested in the other thread. Hope someone
can help me here. Thanks!
[root@vhost1 ~]# find /var/lib/vdsm/persistence
/var/lib/vdsm/persistence
[root@vhost1 ~]# find /var/run/vdsm/netconf
find: `/var/run/vdsm/netconf': No such file or directory
[root@vhost1 ~]# ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 00:25:90:d8:0a:b0 brd ff:ff:ff:ff:ff:ff
4: ;vdsmdummy;: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 62:e5:28:13:9d:ba brd ff:ff:ff:ff:ff:ff
5: bond0: <BROADCAST,MULTICAST,MASTER> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
[root@vhost1 ~]# virsh -r net
error: unknown command: 'net'
[root@vhost1 ~]# virsh -r net-list
Name State Autostart Persistent
--------------------------------------------------
;vdsmdummy; active no no
[root@vhost1 ~]# vdsm-tool restore-nets
Traceback (most recent call last):
File "/usr/share/vdsm/vdsm-restore-net-config", line 137, in <module>
restore()
File "/usr/share/vdsm/vdsm-restore-net-config", line 123, in restore
unified_restoration()
File "/usr/share/vdsm/vdsm-restore-net-config", line 57, in
unified_restoration
_inRollback=True)
File "/usr/share/vdsm/network/api.py", line 616, in setupNetworks
netinfo._libvirtNets2vdsm(libvirt_nets)))
File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 822, in get
d['nics'][dev.name] = _nicinfo(dev, paddr, ipaddrs)
File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 653, in
_nicinfo
info = _devinfo(link, ipaddrs)
File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 681, in
_devinfo
ipv4addr, ipv4netmask, ipv4addrs, ipv6addrs = getIpInfo(link.name,
ipaddrs)
File "/usr/lib/python2.6/site-packages/vdsm/netinfo.py", line 370, in
getIpInfo
ipv4addr, prefix = addr['address'].split('/')
ValueError: need more than 1 value to unpack
Traceback (most recent call last):
File "/usr/bin/vdsm-tool", line 209, in main
return tool_command[cmd]["command"](*args)
File "/usr/lib/python2.6/site-packages/vdsm/tool/restore_nets.py", line
36, in restore_command
restore()
File "/usr/lib/python2.6/site-packages/vdsm/tool/restore_nets.py", line
45, in restore
raise EnvironmentError('Failed to restore the persisted networks')
EnvironmentError: Failed to restore the persisted networks
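The ValueError comes from netinfo splitting each address on '/'; listing the addresses shows whether some interface reports one without a prefix (an assumption about this host, not a confirmed diagnosis):

ip -o addr show | awk '{print $2, $4}'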
The following is mentioned in the original thread, but should help other
Googlers:
ovirt-hosted-engine-setup.log (first attempt):
2014-12-23 22:04:38 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:861 execute-output: ('/sbin/service', 'vdsmd', 'start')
stdout:
Starting multipathd daemon: [ OK ]
Starting rpcbind: [ OK ]
Starting ntpd: [ OK ]
Loading the softdog kernel module: [ OK ]
Starting wdmd: [ OK ]
Starting sanlock: [ OK ]
supervdsm start[ OK ]
Starting iscsid: [ OK ]
[ OK ]
vdsm: Running mkdirs
vdsm: Running configure_coredump
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running check_is_configured
libvirt is already configured for vdsm
vdsm: Running validate_configuration
SUCCESS: ssl configured to true. No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running load_needed_modules
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo
vdsm: Running unified_network_persistence_upgrade
vdsm: stopped during execute unified_network_persistence_upgrade task (task
returned with error code 1).
vdsm start[FAILED]
2014-12-23 22:04:38 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:866 execute-output: ('/sbin/service', 'vdsmd', 'start')
stderr:
initctl: Job is already running: libvirtd
libvirt: Network Filter Driver error : Network filter not found: no
nwfilter with matching name 'vdsm-no-mac-spoofing'
2014-12-23 22:04:38 DEBUG otopi.context context._executeMethod:152 method
exception
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py",
line 155, in _late_setup
state=True
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 188, in state
'start' if state else 'stop'
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 96, in
_executeServiceCommand
raiseOnError=raiseOnError
File "/usr/lib/python2.6/site-packages/otopi/plugin.py", line 871, in
execute
command=args[0],
RuntimeError: Command '/sbin/service' failed to execute
2014-12-23 22:04:38 ERROR otopi.context context._executeMethod:161 Failed
to execute stage 'Environment setup': Command '/sbin/service' failed to
execute
ovirt-hosted-engine-setup.log (further attempts):
2014-12-23 22:42:40 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:861 execute-output: ('/sbin/service', 'vdsmd', 'start')
stdout:
vdsm: Running mkdirs
vdsm: Running configure_coredump
vdsm: Running configure_vdsm_logs
vdsm: Running run_init_hooks
vdsm: Running check_is_configured
libvirt is already configured for vdsm
vdsm: Running validate_configuration
SUCCESS: ssl configured to true. No conflicts
vdsm: Running prepare_transient_repository
vdsm: Running syslog_available
vdsm: Running nwfilter
vdsm: Running dummybr
vdsm: Running load_needed_modules
vdsm: Running tune_system
vdsm: Running test_space
vdsm: Running test_lo
vdsm: Running unified_network_persistence_upgrade
vdsm: stopped during execute unified_network_persistence_upgrade task (task
returned with error code 1).
vdsm start[FAILED]
2014-12-23 22:42:40 DEBUG otopi.plugins.otopi.services.rhel
plugin.execute:866 execute-output: ('/sbin/service', 'vdsmd', 'start')
stderr:
initctl: Job is already running: libvirtd
2014-12-23 22:42:40 DEBUG otopi.context context._executeMethod:152 method
exception
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/system/vdsmenv.py",
line 155, in _late_setup
state=True
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 188, in state
'start' if state else 'stop'
File "/usr/share/otopi/plugins/otopi/services/rhel.py", line 96, in
_executeServiceCommand
raiseOnError=raiseOnError
File "/usr/lib/python2.6/site-packages/otopi/plugin.py", line 871, in
execute
command=args[0],
RuntimeError: Command '/sbin/service' failed to execute
2014-12-23 22:42:40 ERROR otopi.context context._executeMethod:161 Failed
to execute stage 'Environment setup': Command '/sbin/service' failed to
execute
----
Andreas
4
6
Hi,
I don't have much news for 3.6 this week:
ACTION: Features proposed for 3.6.0 must now be collected in the 3.6 Google doc [1] and reviewed by maintainers.
We finished the review process; the remaining key milestones for this release will be scheduled.
For reference, external project schedules we're tracking are:
Fedora 21: 2014-12-09 (RELEASED)
Fedora 22: no earlier than 2015-05-19
Foreman 1.8.0: 2015-03-01
GlusterFS 3.7: 2015-04-29
OpenStack Kilo: 2015-04-30
QEMU 2.1.3: 2014-12-29 (DELAYED)
QEMU 2.2.0: 2014-12-09 (RELEASED)
The tracker bug for 3.6.0 [2] currently shows no blockers.
There are 466 bugs [3] targeted to 3.6.0.
Excluding node and documentation bugs we have 440 bugs [4] targeted to 3.6.0.
[1] http://goo.gl/9X3G49
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1155425
[3] http://goo.gl/zwkF3r
[4] http://goo.gl/ZbUiMc
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
1
0
Hi everyone,
I would like to create a storage domain based on a MooseFS cluster. The first
idea was to do it by mounting it as a POSIX-compliant FS, but it doesn't want
to mount (or I don't know how to do it). The next idea was to use NFS, but
even that won't work: NFS causes a kernel crash and reboots the host when
trying to create the storage domain.
I've checked everything I found on Google and all my NFS settings seem to
be OK, but it's still crashing.
I know that MooseFS support has been raised here before and it may come in
the future, but does anybody know how to get it working in oVirt 3.4 or 3.5?
Or maybe somebody knows of problems with NFS and kernel panics on storage
domain creation?
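For what it's worth, a manual test along these lines (master hostname is a placeholder) would show the exact filesystem type string oVirt needs in the POSIX domain's "VFS Type" field:

mfsmount /mnt/mfstest -H mfsmaster.example.com
grep mfs /proc/mounts   # the third column is the VFS type to give oVirt
umount /mnt/mfstest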
1
0
Please use the formal documentation and, if needed, help improve it.
References for ssl:
http://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob…
http://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob…
http://gerrit.ovirt.org/gitweb?p=ovirt-engine-extension-aaa-ldap.git;a=blob…
----- Original Message -----
> From: "Donny Davis" <donny(a)cloudspin.me>
> To: "Sandvik Agustin" <agustinsandvik(a)gmail.com>, users(a)ovirt.org
> Sent: Wednesday, January 7, 2015 12:12:23 AM
> Subject: Re: [ovirt-users] LDAP Certificate Location?
>
>
>
> In the article you referenced you didn't setup tls
> On Jan 6, 2015 2:04 PM, Sandvik Agustin <agustinsandvik(a)gmail.com> wrote:
>
>
>
> Hi Donny,
>
>
> Sorry to bother you at this time, I installed the 389ds by following this
> http://www.unixmen.com/setup-directory-serverldap-in-centos-6-4-rhel-6-4/
> and now I'm following your documentation at
> https://cloudspin.me/ovirt-simple-ldap-aaa/ I'm wondering if where can I
> find this CA or pem thing you mention on your website "
> /etc/pki/tls/cacerts/ldapCA.pem".
>
> Thanks in Advance,
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
1
0
07 Jan '15
Hello,
I ran into a nasty problem today when creating a new, cloned VM from a
template (one virtual 20 GB disk) on our two-node oVirt cluster: on the
node where I started the VM creation job, load skyrocketed and some VMs
stopped responding until, and even after, the job failed. Everything recovered
without intervention, but this obviously shouldn't happen. I have
attached the relevant vdsm log file. The button to create the VM was
pressed around 11:17; the first error in the vdsm log is at 11:23:58.
The ISO domain is a gluster volume exposed via NFS, the storage domain
for the VM's is also a gluster volume. The underlying filesystem is ZFS.
The hypervisor nodes are full CentOS 6 installs.
I'm guessing the 'no free file handlers in pool' in the vdsm log file is
key here. What can I do to prevent this from happening again? Apart from
not creating new VMs of course :)
Tiemen
3
12
1
0
Re: [ovirt-users] Can't remove a storage domain related to a broken hardware
by Olivier Navas 06 Jan '15
06 Jan '15
Parts of engine.log follow.
- Part 1 repeats every 5 minutes, all the time.
- Part 2 is from when I try to detach the storage domain from the GUI (mixed with logs as in part 1, plus logs related to a VM start initiated by one of my colleagues). You will notice that the engine reports some VMs as "not responding". This happens while the defective storage domain is being removed, but the VMs are in fact fine and their state reverts to normal when the task of removing the storage domain finally fails.
Thank you for your help.
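A way to see from a host whether it is still trying to reach the dead target (the IQN appears in the logs below; forcing a logout by hand is an assumption, not something from this thread):

iscsiadm -m session
iscsiadm -m node -T iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a -u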
===== part 1 =====
2015-01-06 03:50:00,016 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-81) [5ca5ae35] Autorecovering 1 storage domains
2015-01-06 03:50:00,017 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-81) [5ca5ae35] Autorecovering storage domains id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7
2015-01-06 03:50:00,020 INFO [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-81) [789ace0] Running command: ConnectDomainToStorageCommand internal: true. Entities affected : ID: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 Type: Storage
2015-01-06 03:50:00,022 INFO [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-81) [789ace0] ConnectDomainToStorage. Before Connect all hosts to pool. Time:1/6/15 3:50 AM
2015-01-06 03:50:00,208 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-12) [789ace0] START, ConnectStorageServerVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 65ce62da
2015-01-06 03:50:00,215 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-37) [789ace0] START, ConnectStorageServerVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 50bc0a6d
2015-01-06 03:50:00,241 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-29) [789ace0] START, ConnectStorageServerVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: e40ca91
2015-01-06 03:50:00,246 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-32) [789ace0] START, ConnectStorageServerVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 5ddb16e
2015-01-06 03:53:00,216 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-12) [789ace0] Command ConnectStorageServerVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 03:53:00,222 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-12) [789ace0] FINISH, ConnectStorageServerVDSCommand, log id: 65ce62da
2015-01-06 03:53:00,223 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-37) [789ace0] Command ConnectStorageServerVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 03:53:00,224 ERROR [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (org.ovirt.thread.pool-8-thread-12) [789ace0] Failed to connect host n2orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 03:53:00,230 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-37) [789ace0] FINISH, ConnectStorageServerVDSCommand, log id: 50bc0a6d
2015-01-06 03:53:00,245 ERROR [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (org.ovirt.thread.pool-8-thread-37) [789ace0] Failed to connect host n4orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 03:53:00,249 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-29) [789ace0] Command ConnectStorageServerVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 03:53:00,264 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-29) [789ace0] FINISH, ConnectStorageServerVDSCommand, log id: e40ca91
2015-01-06 03:53:00,265 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-32) [789ace0] Command ConnectStorageServerVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 03:53:00,266 ERROR [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (org.ovirt.thread.pool-8-thread-29) [789ace0] Failed to connect host n3orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 03:53:00,272 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-32) [789ace0] FINISH, ConnectStorageServerVDSCommand, log id: 5ddb16e
2015-01-06 03:53:00,287 ERROR [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (org.ovirt.thread.pool-8-thread-32) [789ace0] Failed to connect host n1orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 03:53:00,301 INFO [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-81) [789ace0] ConnectDomainToStorage. After Connect all hosts to pool. Time:1/6/15 3:53 AM
===== part 2 =====
2015-01-06 09:55:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-9) [71cf84f9] Autorecovering 1 storage domains
2015-01-06 09:55:00,011 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-9) [71cf84f9] Autorecovering storage domains id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7
2015-01-06 09:55:00,013 INFO [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-9) [6ce74644] Running command: ConnectDomainToStorageCommand internal: true. Entities affected : ID: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 Type: Storage
2015-01-06 09:55:00,016 INFO [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-9) [6ce74644] ConnectDomainToStorage. Before Connect all hosts to pool. Time:1/6/15 9:55 AM
2015-01-06 09:55:00,201 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-49) [6ce74644] START, ConnectStorageServerVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 2bb163a4
2015-01-06 09:55:00,203 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-18) [6ce74644] START, ConnectStorageServerVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 49562818
2015-01-06 09:55:00,212 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-39) [6ce74644] START, ConnectStorageServerVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 4cb88aac
2015-01-06 09:55:00,214 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-7) [6ce74644] START, ConnectStorageServerVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 23eae7f9
2015-01-06 09:55:57,598 INFO [org.ovirt.engine.core.utils.servlet.UnsupportedLocaleHelper] (ajp--127.0.0.1-8702-6) Invalid locale found in configuration:
2015-01-06 09:55:57,600 INFO [org.ovirt.engine.core.utils.servlet.UnsupportedLocaleHelper] (ajp--127.0.0.1-8702-6) Invalid locale found in configuration:
2015-01-06 09:56:00,437 INFO [org.ovirt.engine.core.utils.servlet.UnsupportedLocaleHelper] (ajp--127.0.0.1-8702-11) Invalid locale found in configuration:
2015-01-06 09:56:00,438 INFO [org.ovirt.engine.core.utils.servlet.UnsupportedLocaleHelper] (ajp--127.0.0.1-8702-11) Invalid locale found in configuration:
2015-01-06 09:56:10,268 INFO [org.ovirt.engine.core.bll.aaa.LoginAdminUserCommand] (ajp--127.0.0.1-8702-3) Running command: LoginAdminUserCommand internal: false.
2015-01-06 09:56:10,276 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-3) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: User admin logged in.
2015-01-06 09:56:10,388 INFO [org.ovirt.engine.core.bll.aaa.LoginUserCommand] (ajp--127.0.0.1-8702-9) Running command: LoginUserCommand internal: false.
2015-01-06 09:56:24,013 INFO [org.ovirt.engine.core.bll.RunVmCommand] (ajp--127.0.0.1-8702-13) [c2fb019] Lock Acquired to object EngineLock [exclusiveLocks= key: e2abb477-f807-470f-ae20-a7205e690638 value: VM
, sharedLocks= ]
2015-01-06 09:56:24,134 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-13) [c2fb019] START, IsVmDuringInitiatingVDSCommand( vmId = e2abb477-f807-470f-ae20-a7205e690638), log id: 5c2ceac9
2015-01-06 09:56:24,136 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (ajp--127.0.0.1-8702-13) [c2fb019] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 5c2ceac9
2015-01-06 09:56:24,357 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-9) [c2fb019] Running command: RunVmCommand internal: false. Entities affected : ID: e2abb477-f807-470f-ae20-a7205e690638 Type: VMAction group VM_BASIC_OPERATIONS with role type USER
2015-01-06 09:56:24,500 INFO [org.ovirt.engine.core.bll.scheduling.policyunits.HaReservationWeightPolicyUnit] (org.ovirt.thread.pool-8-thread-9) [c2fb019] Started HA reservation scoring method
2015-01-06 09:56:24,561 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-9) [c2fb019] START, UpdateVmDynamicDataVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, vmDynamic=org.ovirt.engine.core.common.businessentities.VmDynamic@57214812), log id: 63eaa18e
2015-01-06 09:56:24,573 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (org.ovirt.thread.pool-8-thread-9) [c2fb019] FINISH, UpdateVmDynamicDataVDSCommand, log id: 63eaa18e
2015-01-06 09:56:24,612 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-9) [c2fb019] START, CreateVmVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, vmId=e2abb477-f807-470f-ae20-a7205e690638, vm=VM [ciril-recette]), log id: 5c6b51df
2015-01-06 09:56:24,647 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-9) [c2fb019] START, CreateVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, vmId=e2abb477-f807-470f-ae20-a7205e690638, vm=VM [ciril-recette]), log id: 6bcd4ecc
2015-01-06 09:56:24,691 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-9) [c2fb019] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand kvmEnable=true,keyboardLayout=en-us,nice=0,pitReinjection=false,displayNetwork=ovirtmgmt,copyPasteEnable=true,timeOffset=0,transparentHugePages=true,vmId=e2abb477-f807-470f-ae20-a7205e690638,acpiEnable=true,custom={device_5cc74d49-61c4-4887-8cf3-41135fbdee99device_ee0562ba-7750-4857-8151-f422de37a388device_41d6489a-07ab-481d-b35e-a14e007df6cd=VmDevice {vmId=e2abb477-f807-470f-ae20-a7205e690638, deviceId=41d6489a-07ab-481d-b35e-a14e007df6cd, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={port=3, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}, snapshotId=null}, device_5cc74d49-61c4-4887-8cf3-41135fbdee99device_ee0562ba-7750-4857-8151-f422de37a388device_41d6489a-07ab-481d-b35e-a14e007df6cddevice_e3fa7b29-5b4f-462d-bcf4-298916996aaf=VmDevice {vmId=e2abb477-f807-470f-ae20-a7205e690638, deviceId=e3fa7b29-5b4f-462d-bcf4-298916996aaf, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}, snapshotId=null}, device_5cc74d49-61c4-4887-8cf3-41135fbdee99=VmDevice {vmId=e2abb477-f807-470f-ae20-a7205e690638, deviceId=5cc74d49-61c4-4887-8cf3-41135fbdee99, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=1, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}, snapshotId=null}, device_5cc74d49-61c4-4887-8cf3-41135fbdee99device_ee0562ba-7750-4857-8151-f422de37a388=VmDevice {vmId=e2abb477-f807-470f-ae20-a7205e690638, deviceId=ee0562ba-7750-4857-8151-f422de37a388, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=2, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}, snapshotId=null}},spiceSslCipherSuite=DEFAULT,memSize=6000,smp=4,emulatedMachine=rhel6.5.0,vmType=kvm,memGuaranteedSize=1500,display=qxl,smartcardEnable=false,bootMenuEnable=false,numaTune={mode=interleave, nodeset=0,1},spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard,smpCoresPerSocket=1,maxVCpus=16,devices=[{address={bus=0x00, domain=0x0000, slot=0x02, type=pci, function=0x0}, specParams={ram=65536, vram=32768, heads=1}, device=qxl, type=video, deviceId=061ed0c0-22f5-4099-8bee-44a3dfa19956}, {shared=false, iface=ide, index=2, address={unit=0, bus=1, target=0, controller=0, type=drive}, specParams={path=}, path=, device=cdrom, type=disk, readonly=true, deviceId=dd759f21-7cca-4d56-8f0f-348d61492714}, {shared=false, index=0, volumeID=82528148-a4db-4892-87e6-b41c7b2f3ab8, propagateErrors=off, format=raw, type=disk, iface=virtio, bootOrder=1, address={bus=0x00, domain=0x0000, slot=0x06, type=pci, function=0x0}, domainID=05d4e406-de6d-47dc-8414-bdc7381b6d4a, imageID=f09ba312-5078-4070-801c-4a8c7e28d0aa, specParams={}, optional=false, device=disk, poolID=00000002-0002-0002-0002-00000000000f, readonly=false, deviceId=f09ba312-5078-4070-801c-4a8c7e28d0aa}, {nicModel=pv, address={bus=0x00, domain=0x0000, slot=0x03, type=pci, function=0x0}, specParams={outbound={}, inbound={}}, macAddr=00:1a:4a:70:41:a3, device=bridge, linkActive=true, type=interface, filter=vdsm-no-mac-spoofing, 
network=Cluster_VM, deviceId=b2e197f0-00f4-44f4-bd5c-32824d872c95}, {address={bus=0x00, domain=0x0000, slot=0x07, type=pci, function=0x0}, specParams={model=virtio}, device=memballoon, type=balloon, deviceId=e0a803e7-eb97-4285-a49f-6e99a5df974c}, {index=0, model=virtio-scsi, address={bus=0x00, domain=0x0000, slot=0x04, type=pci, function=0x0}, specParams={}, device=scsi, type=controller, deviceId=bd1cf13f-212b-4ceb-96c7-ee4950f7a7d3}, {address={bus=0x00, domain=0x0000, slot=0x05, type=pci, function=0x0}, specParams={}, device=virtio-serial, type=controller, deviceId=a2df4f4f-2920-4243-ba9a-26206057de6e}],vmName=ciril-recette,cpuType=Westmere,fileTransferEnable=true
2015-01-06 09:56:24,839 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (org.ovirt.thread.pool-8-thread-9) [c2fb019] FINISH, CreateVDSCommand, log id: 6bcd4ecc
2015-01-06 09:56:24,852 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (org.ovirt.thread.pool-8-thread-9) [c2fb019] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 5c6b51df
2015-01-06 09:56:24,854 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-9) [c2fb019] Lock freed to object EngineLock [exclusiveLocks= key: e2abb477-f807-470f-ae20-a7205e690638 value: VM
, sharedLocks= ]
2015-01-06 09:56:24,873 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-9) [c2fb019] Correlation ID: c2fb019, Job ID: 42d86800-fc18-42d6-98a6-0cf205a85a2c, Call Stack: null, Custom Event ID: -1, Message: VM ciril-recette was started by admin (Host: n4orna).
2015-01-06 09:56:28,296 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-73) [128f72d7] VM ciril-recette e2abb477-f807-470f-ae20-a7205e690638 moved from WaitForLaunch --> PoweringUp
2015-01-06 09:56:28,298 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-73) [128f72d7] START, FullListVdsCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, vds=Host[n4orna,920bf64c-62f5-4b12-a69e-eef9936576c5], vmIds=[e2abb477-f807-470f-ae20-a7205e690638]), log id: 79d27c1c
2015-01-06 09:56:28,307 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand] (DefaultQuartzScheduler_Worker-73) [128f72d7] FINISH, FullListVdsCommand, return: [{displaySecurePort=5909, kvmEnable=true, nicModel=rtl8139,pv, keyboardLayout=en-us, displayIp=10.99.23.4, pauseCode=NOERR, pitReinjection=false, nice=0, displayNetwork=ovirtmgmt, copyPasteEnable=true, timeOffset=0, transparentHugePages=true, vmId=e2abb477-f807-470f-ae20-a7205e690638, acpiEnable=true, custom={device_5cc74d49-61c4-4887-8cf3-41135fbdee99device_ee0562ba-7750-4857-8151-f422de37a388device_41d6489a-07ab-481d-b35e-a14e007df6cd=VmDevice {vmId=e2abb477-f807-470f-ae20-a7205e690638, deviceId=41d6489a-07ab-481d-b35e-a14e007df6cd, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={port=3, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}, snapshotId=null}, device_5cc74d49-61c4-4887-8cf3-41135fbdee99device_ee0562ba-7750-4857-8151-f422de37a388device_41d6489a-07ab-481d-b35e-a14e007df6cddevice_e3fa7b29-5b4f-462d-bcf4-298916996aaf=VmDevice {vmId=e2abb477-f807-470f-ae20-a7205e690638, deviceId=e3fa7b29-5b4f-462d-bcf4-298916996aaf, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={bus=0x00, domain=0x0000, type=pci, slot=0x01, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAlias=ide0, customProperties={}, snapshotId=null}, device_5cc74d49-61c4-4887-8cf3-41135fbdee99=VmDevice {vmId=e2abb477-f807-470f-ae20-a7205e690638, deviceId=5cc74d49-61c4-4887-8cf3-41135fbdee99, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=1, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, customProperties={}, snapshotId=null}, device_5cc74d49-61c4-4887-8cf3-41135fbdee99device_ee0562ba-7750-4857-8151-f422de37a388=VmDevice {vmId=e2abb477-f807-470f-ae20-a7205e690638, deviceId=ee0562ba-7750-4857-8151-f422de37a388, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={port=2, bus=0, controller=0, type=virtio-serial}, managed=false, plugged=true, readOnly=false, deviceAlias=channel1, customProperties={}, snapshotId=null}}, spiceSslCipherSuite=DEFAULT, memSize=6000, smp=4, displayPort=-1, emulatedMachine=rhel6.5.0, vmType=kvm, status=Up, memGuaranteedSize=1500, display=qxl, pid=23762, smartcardEnable=false, bootMenuEnable=false, spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard, numaTune={nodeset=0,1, mode=interleave}, smpCoresPerSocket=1, maxVCpus=16, clientIp=, devices=[Ljava.lang.Object;@69d250bf, vmName=ciril-recette, fileTransferEnable=true, cpuType=Westmere}], log id: 79d27c1c
2015-01-06 09:56:28,328 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-73) [128f72d7] Received a spice Device without an address when processing VM e2abb477-f807-470f-ae20-a7205e690638 devices, skipping device: {specParams={spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,susbredir,ssmartcard, keyMap=en-us, displayNetwork=ovirtmgmt, copyPasteEnable=true, displayIp=10.99.23.4}, device=spice, tlsPort=5909, type=graphics}
2015-01-06 09:56:46,936 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-55) [27e4f6a6] VM ciril-recette e2abb477-f807-470f-ae20-a7205e690638 moved from PoweringUp --> Up
2015-01-06 09:56:47,065 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-55) [27e4f6a6] Correlation ID: c2fb019, Job ID: 42d86800-fc18-42d6-98a6-0cf205a85a2c, Call Stack: null, Custom Event ID: -1, Message: VM ciril-recette started on Host n4orna
2015-01-06 09:57:57,112 INFO [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (ajp--127.0.0.1-8702-6) [631b7fb5] Lock Acquired to object EngineLock [exclusiveLocks= key: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 value: STORAGE
, sharedLocks= ]
2015-01-06 09:57:57,517 INFO [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] Running command: DetachStorageDomainFromPoolCommand internal: false. Entities affected : ID: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 Type: StorageAction group MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2015-01-06 09:57:57,520 INFO [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] Start detach storage domain
2015-01-06 09:57:57,530 INFO [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] Detach storage domain: before connect
2015-01-06 09:57:57,720 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-50) [631b7fb5] START, ConnectStorageServerVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 13e1b92c
2015-01-06 09:57:57,728 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-21) [631b7fb5] START, ConnectStorageServerVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 7b9a83cf
2015-01-06 09:57:57,737 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-37) [631b7fb5] START, ConnectStorageServerVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 2a3e2925
2015-01-06 09:57:57,759 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-16) [631b7fb5] START, ConnectStorageServerVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 6386df71
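
Each of the four ConnectStorageServerVDSCommand calls above just asks VDSM on that host to log in to the two iSCSI portals (10.23.2.199 and 10.23.2.198) for the Fujitsu target. As a rough sketch of the equivalent host-side sequence -- portals and IQN taken from the log, stock iscsiadm assumed, and VDSM's actual flags simplified -- in Python:

    #!/usr/bin/env python
    # Rough sketch of the host-side iSCSI login that the engine's
    # ConnectStorageServerVDSCommand ultimately drives via VDSM:
    # a sendtargets discovery plus a login per portal.
    import subprocess

    IQN = "iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a"
    PORTALS = ["10.23.2.199", "10.23.2.198"]

    for portal in PORTALS:
        # Ask the portal which targets it exposes
        subprocess.check_call(["iscsiadm", "-m", "discovery",
                               "-t", "sendtargets", "-p", portal])
        # Log in to the target through this portal
        subprocess.check_call(["iscsiadm", "-m", "node",
                               "-T", IQN, "-p", portal, "--login"])

If a portal no longer answers, the login hangs until the iSCSI timeouts fire, which matches the VDSNetworkException timeouts that follow.
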
2015-01-06 09:58:00,209 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-49) [6ce74644] Command ConnectStorageServerVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 09:58:00,214 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-18) [6ce74644] Command ConnectStorageServerVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 09:58:00,220 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-39) [6ce74644] Command ConnectStorageServerVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 09:58:00,215 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-49) [6ce74644] FINISH, ConnectStorageServerVDSCommand, log id: 2bb163a4
2015-01-06 09:58:00,227 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-39) [6ce74644] FINISH, ConnectStorageServerVDSCommand, log id: 4cb88aac
2015-01-06 09:58:00,225 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-7) [6ce74644] Command ConnectStorageServerVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 09:58:00,221 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-18) [6ce74644] FINISH, ConnectStorageServerVDSCommand, log id: 49562818
2015-01-06 09:58:00,236 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-7) [6ce74644] FINISH, ConnectStorageServerVDSCommand, log id: 23eae7f9
2015-01-06 09:58:00,230 ERROR [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (org.ovirt.thread.pool-8-thread-39) [6ce74644] Failed to connect host n3orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 09:58:00,228 ERROR [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (org.ovirt.thread.pool-8-thread-49) [6ce74644] Failed to connect host n2orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 09:58:00,239 ERROR [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (org.ovirt.thread.pool-8-thread-7) [6ce74644] Failed to connect host n4orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 09:58:00,237 ERROR [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (org.ovirt.thread.pool-8-thread-18) [6ce74644] Failed to connect host n1orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
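
All four hosts hit the same TimeoutException under correlation ID 6ce74644, but the worker threads interleave, so the flow is hard to follow top to bottom. One way to untangle engine.log is to group lines by the bracketed correlation ID; a minimal Python sketch (the log path is an assumption):

    # Sketch: group engine.log lines by the bracketed correlation ID
    # (e.g. [6ce74644]) so each flow's interleaved worker-thread output
    # reads together. Stack-trace continuation lines carry no ID and
    # are skipped by this sketch.
    import re
    from collections import defaultdict

    CORR = re.compile(r"\[([0-9a-f-]{6,})\]")

    flows = defaultdict(list)
    with open("engine.log") as f:  # path is an assumption
        for line in f:
            m = CORR.search(line)
            if m:
                flows[m.group(1)].append(line.rstrip())

    for corr_id, entries in sorted(flows.items()):
        print("=== %s: %d lines ===" % (corr_id, len(entries)))
        for entry in entries:
            print(entry)
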
2015-01-06 09:58:00,294 INFO [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-9) [6ce74644] ConnectDomainToStorage. After Connect all hosts to pool. Time:1/6/15 9:58 AM
2015-01-06 09:58:03,032 INFO [org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.pool-8-thread-6) [4205431d] Running command: SyncLunsInfoForBlockStorageDomainCommand internal: true. Entities affected : ID: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 Type: Storage
2015-01-06 09:58:03,111 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) [4205431d] START, GetVGInfoVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, VGID=Zqoys8-LKeb-QJIG-TRDR-dnkP-0RWd-ftbOom), log id: a46928a
2015-01-06 09:58:03,225 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) [4205431d] Failed in GetVGInfoVDS method
2015-01-06 09:58:03,227 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) [4205431d] Command org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand return value
OneVGReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=506, mMessage=Volume Group does not exist: ('vg_uuid: Zqoys8-LKeb-QJIG-TRDR-dnkP-0RWd-ftbOom',)]]
2015-01-06 09:58:03,229 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) [4205431d] HostName = n2orna
2015-01-06 09:58:03,231 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) [4205431d] Command GetVGInfoVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, VGID=Zqoys8-LKeb-QJIG-TRDR-dnkP-0RWd-ftbOom) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetVGInfoVDS, error = Volume Group does not exist: ('vg_uuid: Zqoys8-LKeb-QJIG-TRDR-dnkP-0RWd-ftbOom',), code = 506
2015-01-06 09:58:03,234 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-6) [4205431d] FINISH, GetVGInfoVDSCommand, log id: a46928a
2015-01-06 09:58:03,236 ERROR [org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.pool-8-thread-6) [4205431d] Command org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to GetVGInfoVDS, error = Volume Group does not exist: ('vg_uuid: Zqoys8-LKeb-QJIG-TRDR-dnkP-0RWd-ftbOom',), code = 506 (Failed with error VolumeGroupDoesNotExist and code 506)
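
This looks like the actual root cause: LVM on n2orna no longer sees any volume group with that UUID, i.e. the LUN backing fujitsu_backup_rsync has disappeared from the hosts (removed or unzoned on the array, presumably). That is easy to confirm on a host with plain LVM tools; a small sketch in Python:

    # Sketch: check on a host whether the VG backing the domain is
    # still visible to LVM. The UUID is the one the engine queries above.
    import subprocess

    WANTED = "Zqoys8-LKeb-QJIG-TRDR-dnkP-0RWd-ftbOom"

    out = subprocess.check_output(
        ["vgs", "--noheadings", "-o", "vg_name,vg_uuid"],
        universal_newlines=True)
    for line in out.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[1] == WANTED:
            print("VG %s is still visible" % fields[0])
            break
    else:
        print("no VG with uuid %s -- LUN removed or unzoned?" % WANTED)
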
2015-01-06 09:58:06,526 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-46) domain 34c95a44-db7f-4d0f-ba13-5f06a7feefe7:fujitsu_backup_rsync in problem. vds: n1orna
2015-01-06 09:58:06,529 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-46) Host 60d8c75b-38f1-4cd0-b162-729285eadefd has reported new storage access problem to the following domains 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 marking it for storage connections and pool metadata refresh (report id: 6a081ad8-201f-48e4-ba56-5a581fac4475)
2015-01-06 09:58:06,559 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-49) Host 38893029-0eb8-4d19-a28f-07680d8d6868 has reported new storage access problem to the following domains 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 marking it for storage connections and pool metadata refresh (report id: 922d1075-c975-4d9f-87d1-db8d5aa4059d)
2015-01-06 09:58:06,644 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-57) [6e3156c6] Running storage connections refresh for hosts [38893029-0eb8-4d19-a28f-07680d8d6868, 60d8c75b-38f1-4cd0-b162-729285eadefd]
2015-01-06 09:58:06,648 INFO [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-8-thread-1) [265f9df9] Running command: ConnectHostToStoragePoolServersCommand internal: true. Entities affected : ID: 00000002-0002-0002-0002-00000000000f Type: StoragePool
2015-01-06 09:58:06,648 INFO [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-8-thread-42) [4f7c5a1d] Running command: ConnectHostToStoragePoolServersCommand internal: true. Entities affected : ID: 00000002-0002-0002-0002-00000000000f Type: StoragePool
2015-01-06 09:58:06,700 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-24) Host 920bf64c-62f5-4b12-a69e-eef9936576c5 has reported new storage access problem to the following domains 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 marking it for storage connections and pool metadata refresh (report id: 12dfe3fe-966a-486e-8eda-89575648545f)
2015-01-06 09:58:06,762 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-1) [265f9df9] START, ConnectStorageServerVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 70a8393b
2015-01-06 09:58:06,769 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-42) [4f7c5a1d] START, ConnectStorageServerVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 396e347
2015-01-06 09:58:15,975 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-31) Host ab744426-2294-4c0b-aaf5-08ebb162f542 has reported new storage access problem to the following domains 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 marking it for storage connections and pool metadata refresh (report id: 18fffe35-403c-413e-acb1-8d173aa447da)
2015-01-06 09:58:52,862 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-1) [7face9e1] VM ciril-recette e2abb477-f807-470f-ae20-a7205e690638 moved from Up --> RebootInProgress
2015-01-06 09:59:23,829 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-98) [4c461177] VM ciril-recette e2abb477-f807-470f-ae20-a7205e690638 moved from RebootInProgress --> Up
2015-01-06 09:59:48,453 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-35) domain 05d4e406-de6d-47dc-8414-bdc7381b6d4a:MSA1-O2-vd01 in problem. vds: n3orna
2015-01-06 10:00:34,503 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-94) [289a4e02] VM ldap-recette ad20808d-b62c-4087-a41a-31ed7672e364 moved from Up --> NotResponding
2015-01-06 10:00:34,550 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-94) [289a4e02] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM ldap-recette is not responding.
2015-01-06 10:00:34,572 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-94) [289a4e02] VM ciril 627fbbe4-4812-4218-a512-cc1ed26124fd moved from Up --> NotResponding
2015-01-06 10:00:34,618 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-94) [289a4e02] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM ciril is not responding.
2015-01-06 10:00:50,138 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-96) [50e93ebc] VM antivirus2 1437b20a-d3cb-4537-ace5-2dee650f561f moved from Up --> NotResponding
2015-01-06 10:00:50,183 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-96) [50e93ebc] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM antivirus2 is not responding.
2015-01-06 10:00:50,188 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-96) [50e93ebc] VM rsst-formation 064f2b1f-f245-4f34-9747-27c28ce388df moved from Up --> NotResponding
2015-01-06 10:00:50,257 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-96) [50e93ebc] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM rsst-formation is not responding.
2015-01-06 10:00:50,262 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-96) [50e93ebc] VM forum c94e5f1f-1f10-47e5-b4bb-87d1ef31caf9 moved from Up --> NotResponding
2015-01-06 10:00:50,303 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-96) [50e93ebc] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM forum is not responding.
2015-01-06 10:00:57,729 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-50) [631b7fb5] Command ConnectStorageServerVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 10:00:57,737 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-21) [631b7fb5] Command ConnectStorageServerVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 10:00:57,745 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-37) [631b7fb5] Command ConnectStorageServerVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 10:00:57,741 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-50) [631b7fb5] FINISH, ConnectStorageServerVDSCommand, log id: 13e1b92c
2015-01-06 10:00:57,760 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-37) [631b7fb5] FINISH, ConnectStorageServerVDSCommand, log id: 2a3e2925
2015-01-06 10:00:57,750 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-21) [631b7fb5] FINISH, ConnectStorageServerVDSCommand, log id: 7b9a83cf
2015-01-06 10:00:57,764 ERROR [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-37) [631b7fb5] Failed to connect host n4orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 10:00:57,762 ERROR [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-50) [631b7fb5] Failed to connect host n2orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 10:00:57,767 ERROR [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-21) [631b7fb5] Failed to connect host n1orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 10:00:57,767 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-16) [631b7fb5] Command ConnectStorageServerVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 10:00:57,868 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-16) [631b7fb5] FINISH, ConnectStorageServerVDSCommand, log id: 6386df71
2015-01-06 10:00:57,871 ERROR [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-16) [631b7fb5] Failed to connect host n3orna to storage domain (name: fujitsu_backup_rsync, id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7). Exception: org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022): org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:58) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.runConnectionStorageToDomain(ISCSIStorageHelper.java:31) [bll.jar:]
at org.ovirt.engine.core.bll.storage.ISCSIStorageHelper.connectStorageToDomainByVdsId(ISCSIStorageHelper.java:227) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:336) [bll.jar:]
at org.ovirt.engine.core.bll.storage.StorageDomainCommandBase$3.call(StorageDomainCommandBase.java:331) [bll.jar:]
at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalCallable.call(ThreadPoolUtil.java:112) [utils.jar:]
at java.util.concurrent.FutureTask.run(FutureTask.java:262) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_71]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_71]
at java.lang.Thread.run(Thread.java:745) [rt.jar:1.7.0_71]
2015-01-06 10:00:57,902 INFO [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] Detach storage domain: after connect
2015-01-06 10:00:57,906 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] START, DetachStorageDomainVDSCommand( storagePoolId = 00000002-0002-0002-0002-00000000000f, ignoreFailoverLimit = false, storageDomainId = 34c95a44-db7f-4d0f-ba13-5f06a7feefe7, masterDomainId = 00000000-0000-0000-0000-000000000000, masterVersion = 1, force = false), log id: 7e14d5fa
2015-01-06 10:01:05,877 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-89) [3c054f3d] VM voip-admin 34d7033b-72bd-4c61-9a24-a73c5541955c moved from Up --> NotResponding
2015-01-06 10:01:05,925 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-89) [3c054f3d] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM voip-admin is not responding.
2015-01-06 10:01:06,770 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-1) [265f9df9] Command ConnectStorageServerVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 10:01:06,776 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-42) [4f7c5a1d] Command ConnectStorageServerVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]) execution failed. Exception: VDSNetworkException: java.util.concurrent.TimeoutException
2015-01-06 10:01:06,785 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-1) [265f9df9] FINISH, ConnectStorageServerVDSCommand, log id: 70a8393b
2015-01-06 10:01:06,796 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-42) [4f7c5a1d] FINISH, ConnectStorageServerVDSCommand, log id: 396e347
2015-01-06 10:01:06,799 ERROR [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-8-thread-1) [265f9df9] Command org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
2015-01-06 10:01:06,801 ERROR [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-8-thread-42) [4f7c5a1d] Command org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: java.util.concurrent.TimeoutException (Failed with error VDS_NETWORK_ERROR and code 5022)
2015-01-06 10:01:06,813 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-57) [6e3156c6] Submitting to the event queue pool refresh for hosts [38893029-0eb8-4d19-a28f-07680d8d6868, 60d8c75b-38f1-4cd0-b162-729285eadefd]
2015-01-06 10:01:06,816 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-37) Running storage pool metadata refresh for hosts {1}
2015-01-06 10:01:06,844 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (org.ovirt.thread.pool-8-thread-47) START, ConnectStoragePoolVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, vdsId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, masterVersion = 1), log id: 377e2518
2015-01-06 10:01:06,844 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (org.ovirt.thread.pool-8-thread-26) START, ConnectStoragePoolVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, vdsId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, masterVersion = 1), log id: 2d074200
2015-01-06 10:01:15,895 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] Failed in DetachStorageDomainVDS method
2015-01-06 10:01:15,898 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] IrsBroker::Failed::DetachStorageDomainVDS due to: IRSErrorException: IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain does not exist: ('34c95a44-db7f-4d0f-ba13-5f06a7feefe7',), code = 358
2015-01-06 10:01:21,229 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (org.ovirt.thread.pool-8-thread-26) FINISH, ConnectStoragePoolVDSCommand, log id: 2d074200
2015-01-06 10:01:22,578 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-86) [2ba9b406] VM packetfence adef4f71-d322-4307-b1d1-c0988bd4efcb moved from Up --> NotResponding
2015-01-06 10:01:22,624 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-86) [2ba9b406] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM packetfence is not responding.
2015-01-06 10:01:22,629 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-86) [2ba9b406] VM afs1 2256fed4-ff9e-4520-9e89-fe0bca30ec1b moved from Up --> NotResponding
2015-01-06 10:01:22,672 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-86) [2ba9b406] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM afs1 is not responding.
2015-01-06 10:01:22,676 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-86) [2ba9b406] VM piros e22277e0-723f-40b6-a46d-ca8f821807e7 moved from Up --> NotResponding
2015-01-06 10:01:22,719 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-86) [2ba9b406] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM piros is not responding.
2015-01-06 10:03:21,860 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] START, SpmStopVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f), log id: 15cd9c64
2015-01-06 10:03:22,004 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] SpmStopVDSCommand::Stopping SPM on vds n2orna, pool id 00000002-0002-0002-0002-00000000000f
2015-01-06 10:03:22,032 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (org.ovirt.thread.pool-8-thread-47) FINISH, ConnectStoragePoolVDSCommand, log id: 377e2518
2015-01-06 10:03:22,037 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-3) domain 75a3e25a-2ea6-4b95-9d9f-9fe791e38e97:MSA2-O2 in problem. vds: n2orna
2015-01-06 10:03:22,047 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-46) starting processDomainRecovery for domain 34c95a44-db7f-4d0f-ba13-5f06a7feefe7:fujitsu_backup_rsync
2015-01-06 10:03:22,179 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] FINISH, SpmStopVDSCommand, log id: 15cd9c64
2015-01-06 10:03:22,181 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) [631b7fb5] Irs placed on server 38893029-0eb8-4d19-a28f-07680d8d6868 failed. Proceed Failover
2015-01-06 10:03:22,219 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-46) Domain 34c95a44-db7f-4d0f-ba13-5f06a7feefe7:fujitsu_backup_rsync was reported by all hosts in status UP as problematic. Moving the domain to NonOperational.
2015-01-06 10:03:22,251 INFO [org.ovirt.engine.core.bll.storage.DeactivateStorageDomainCommand] (org.ovirt.thread.pool-8-thread-46) [74a0283] Failed to Acquire Lock to object EngineLock [exclusiveLocks= key: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 value: STORAGE, sharedLocks= key: 00000002-0002-0002-0002-00000000000f value: POOL]
2015-01-06 10:03:22,258 WARN [org.ovirt.engine.core.bll.storage.DeactivateStorageDomainCommand] (org.ovirt.thread.pool-8-thread-46) [74a0283] CanDoAction of action DeactivateStorageDomain failed. Reasons:VAR__TYPE__STORAGE__DOMAIN,VAR__ACTION__DEACTIVATE,ACTION_TYPE_FAILED_OBJECT_LOCKED
2015-01-06 10:03:22,279 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-46) [74a0283] Removing vds [] from the domain in maintenance cache
2015-01-06 10:03:22,282 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-46) [74a0283] Removing host(s) [] from hosts unseen domain report cache
2015-01-06 10:03:22,300 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: 00000002-0002-0002-0002-00000000000f Type: StoragePool
2015-01-06 10:03:22,306 INFO [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (org.ovirt.thread.pool-8-thread-17) [422d495e] Storage Pool 00000002-0002-0002-0002-00000000000f - Updating Storage Domain 05d4e406-de6d-47dc-8414-bdc7381b6d4a status from Active to Unknown, reason : null
2015-01-06 10:03:22,311 INFO [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (org.ovirt.thread.pool-8-thread-17) [422d495e] Storage Pool 00000002-0002-0002-0002-00000000000f - Updating Storage Domain 75a3e25a-2ea6-4b95-9d9f-9fe791e38e97 status from Active to Unknown, reason : null
2015-01-06 10:03:22,315 INFO [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (org.ovirt.thread.pool-8-thread-17) [422d495e] Storage Pool 00000002-0002-0002-0002-00000000000f - Updating Storage Domain 5a8e48d4-25f0-46f4-9682-e92785e9057a status from Active to Unknown, reason : null
2015-01-06 10:03:22,319 INFO [org.ovirt.engine.core.vdsbroker.storage.StoragePoolDomainHelper] (org.ovirt.thread.pool-8-thread-17) [422d495e] Storage Pool 00000002-0002-0002-0002-00000000000f - Updating Storage Domain 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 status from Active to Unknown, reason : null
2015-01-06 10:03:22,363 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-17) [422d495e] Correlation ID: 422d495e, Call Stack: null, Custom Event ID: -1, Message: Data Center is being initialized, please wait for initialization to complete.
2015-01-06 10:03:22,381 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) [422d495e] hostFromVds::selectedVds - n3orna, spmStatus Free, storage pool Default
2015-01-06 10:03:22,391 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) [422d495e] starting spm on vds n3orna, storage pool Default, prevId -1, LVER -1
2015-01-06 10:03:22,418 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] START, SpmStartVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 51aadbb5
2015-01-06 10:03:22,446 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] spmStart polling started: taskId = 58d4fff4-d230-4c6b-aae3-62b7ff427098
2015-01-06 10:03:26,697 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-100) [7ad552fa] VM ldap-recette ad20808d-b62c-4087-a41a-31ed7672e364 moved from NotResponding --> Up
2015-01-06 10:03:26,699 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-100) [7ad552fa] VM voip-admin 34d7033b-72bd-4c61-9a24-a73c5541955c moved from NotResponding --> Up
2015-01-06 10:03:26,702 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-100) [7ad552fa] VM packetfence adef4f71-d322-4307-b1d1-c0988bd4efcb moved from NotResponding --> Up
2015-01-06 10:03:26,704 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-100) [7ad552fa] VM afs1 2256fed4-ff9e-4520-9e89-fe0bca30ec1b moved from NotResponding --> Up
2015-01-06 10:03:26,707 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-100) [7ad552fa] VM piros e22277e0-723f-40b6-a46d-ca8f821807e7 moved from NotResponding --> Up
2015-01-06 10:03:26,709 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-100) [7ad552fa] VM ciril 627fbbe4-4812-4218-a512-cc1ed26124fd moved from NotResponding --> Up
2015-01-06 10:03:26,712 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-100) [7ad552fa] VM antivirus2 1437b20a-d3cb-4537-ace5-2dee650f561f moved from NotResponding --> Up
2015-01-06 10:03:26,714 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-100) [7ad552fa] VM rsst-formation 064f2b1f-f245-4f34-9747-27c28ce388df moved from NotResponding --> Up
2015-01-06 10:03:26,716 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-100) [7ad552fa] VM forum c94e5f1f-1f10-47e5-b4bb-87d1ef31caf9 moved from NotResponding --> Up
2015-01-06 10:03:28,136 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] spmStart polling ended: taskId = 58d4fff4-d230-4c6b-aae3-62b7ff427098 task status = finished
2015-01-06 10:03:28,155 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] spmStart polling ended, spm status: SPM
2015-01-06 10:03:28,191 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] START, HSMClearTaskVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, taskId=58d4fff4-d230-4c6b-aae3-62b7ff427098), log id: 3c3c7472
2015-01-06 10:03:28,199 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] FINISH, HSMClearTaskVDSCommand, log id: 3c3c7472
2015-01-06 10:03:28,201 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@146b8db3, log id: 51aadbb5
2015-01-06 10:03:28,212 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) [422d495e] Initialize Irs proxy from vds: 10.99.23.3
2015-01-06 10:03:28,248 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-17) [422d495e] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host n3orna (Address: 10.99.23.3).
2015-01-06 10:03:28,273 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-26) [422d495e] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 00000002-0002-0002-0002-00000000000f, ignoreFailoverLimit = false), log id: 5be8ceb1
2015-01-06 10:03:28,530 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] Failed in DetachStorageDomainVDS method
2015-01-06 10:03:28,533 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] IrsBroker::Failed::DetachStorageDomainVDS due to: IRSErrorException: IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain does not exist: ('34c95a44-db7f-4d0f-ba13-5f06a7feefe7',), code = 358
2015-01-06 10:03:28,688 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] START, SpmStopVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f), log id: 430bdc38
2015-01-06 10:03:28,696 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] SpmStopVDSCommand::Stopping SPM on vds n3orna, pool id 00000002-0002-0002-0002-00000000000f
2015-01-06 10:03:28,943 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-8-thread-17) [422d495e] FINISH, SpmStopVDSCommand, log id: 430bdc38
2015-01-06 10:03:28,946 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) [422d495e] Irs placed on server ab744426-2294-4c0b-aaf5-08ebb162f542 failed. Proceed Failover
2015-01-06 10:03:29,052 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: 00000002-0002-0002-0002-00000000000f Type: StoragePool
2015-01-06 10:03:29,099 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-17) [18526185] Correlation ID: 18526185, Call Stack: null, Custom Event ID: -1, Message: Data Center is being initialized, please wait for initialization to complete.
2015-01-06 10:03:29,113 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) [18526185] hostFromVds::selectedVds - n4orna, spmStatus Free, storage pool Default
2015-01-06 10:03:29,123 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) [18526185] starting spm on vds n4orna, storage pool Default, prevId -1, LVER -1
2015-01-06 10:03:29,151 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] START, SpmStartVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 3992a29c
2015-01-06 10:03:29,164 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] spmStart polling started: taskId = d2c1864d-c71c-4378-a4a5-97d0afafd775
2015-01-06 10:03:31,478 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] spmStart polling ended: taskId = d2c1864d-c71c-4378-a4a5-97d0afafd775 task status = finished
2015-01-06 10:03:31,490 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] spmStart polling ended, spm status: SPM
2015-01-06 10:03:31,518 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] START, HSMClearTaskVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, taskId=d2c1864d-c71c-4378-a4a5-97d0afafd775), log id: 29266dd2
2015-01-06 10:03:31,526 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] FINISH, HSMClearTaskVDSCommand, log id: 29266dd2
2015-01-06 10:03:31,528 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@59cd7a77, log id: 3992a29c
2015-01-06 10:03:31,539 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) [18526185] Initialize Irs proxy from vds: 10.99.23.4
2015-01-06 10:03:31,575 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-17) [18526185] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host n4orna (Address: 10.99.23.4).
2015-01-06 10:03:31,599 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-13) [18526185] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 00000002-0002-0002-0002-00000000000f, ignoreFailoverLimit = false), log id: 5e2b03ab
2015-01-06 10:03:31,711 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] Failed in DetachStorageDomainVDS method
2015-01-06 10:03:31,714 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] IrsBroker::Failed::DetachStorageDomainVDS due to: IRSErrorException: IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain does not exist: ('34c95a44-db7f-4d0f-ba13-5f06a7feefe7',), code = 358
2015-01-06 10:03:31,718 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.DetachStorageDomainVDSCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] FINISH, DetachStorageDomainVDSCommand, log id: 7e14d5fa
2015-01-06 10:03:31,718 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-13) [18526185] -- executeIrsBrokerCommand: Attempting on storage pool 00000002-0002-0002-0002-00000000000f
2015-01-06 10:03:31,721 ERROR [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] Command org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException: IRSGenericException: IRSErrorException: Failed to DetachStorageDomainVDS, error = Storage domain does not exist: ('34c95a44-db7f-4d0f-ba13-5f06a7feefe7',), code = 358 (Failed with error StorageDomainDoesNotExist and code 358)
2015-01-06 10:03:31,741 INFO [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] Command [id=3b4688b2-b626-470c-9ac9-4a65b0ef0864]: Compensating CHANGED_STATUS_ONLY of org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot: EntityStatusSnapshot [id=storagePoolId = 00000002-0002-0002-0002-00000000000f, storageId = 34c95a44-db7f-4d0f-ba13-5f06a7feefe7, status=Inactive].
2015-01-06 10:03:31,761 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-17) [18526185] Correlation ID: 631b7fb5, Job ID: 780fb013-3a4c-4cc7-9bfd-a755989a6eeb, Call Stack: null, Custom Event ID: -1, Message: Failed to detach Storage Domain fujitsu_backup_rsync to Data Center Default. (User: admin)
2015-01-06 10:03:31,771 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-13) [18526185] START, HSMGetAllTasksInfoVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5), log id: 32a82db7
2015-01-06 10:03:31,773 INFO [org.ovirt.engine.core.bll.storage.DetachStorageDomainFromPoolCommand] (org.ovirt.thread.pool-8-thread-17) [18526185] Lock freed to object EngineLock [exclusiveLocks= key: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 value: STORAGE
, sharedLocks= ]
2015-01-06 10:03:31,779 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-13) [18526185] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 32a82db7
2015-01-06 10:03:31,782 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-13) [18526185] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 5e2b03ab
2015-01-06 10:03:31,782 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-26) [422d495e] -- executeIrsBrokerCommand: Attempting on storage pool 00000002-0002-0002-0002-00000000000f
2015-01-06 10:03:31,785 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-8-thread-13) [18526185] Discovered no tasks on Storage Pool Default
2015-01-06 10:03:31,815 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-26) [422d495e] START, HSMGetAllTasksInfoVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5), log id: 4eb41b3a
2015-01-06 10:03:31,825 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-26) [422d495e] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 4eb41b3a
2015-01-06 10:03:31,829 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-8-thread-26) [422d495e] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 5be8ceb1
2015-01-06 10:03:31,831 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-8-thread-26) [422d495e] Discovered no tasks on Storage Pool Default
2015-01-06 10:03:32,008 INFO [org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.pool-8-thread-32) [afbbe0a] Running command: SyncLunsInfoForBlockStorageDomainCommand internal: true. Entities affected : ID: 05d4e406-de6d-47dc-8414-bdc7381b6d4a Type: Storage
2015-01-06 10:03:32,023 INFO [org.ovirt.engine.core.bll.storage.SyncLunsInfoForBlockStorageDomainCommand] (org.ovirt.thread.pool-8-thread-7) [73793c2b] Running command: SyncLunsInfoForBlockStorageDomainCommand internal: true. Entities affected : ID: 75a3e25a-2ea6-4b95-9d9f-9fe791e38e97 Type: Storage
2015-01-06 10:03:32,075 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-32) [afbbe0a] START, GetVGInfoVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, VGID=CtELsP-UYZb-zgRH-bVz7-lijD-Elfy-frzEJp), log id: 19fe6181
2015-01-06 10:03:32,091 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-7) [73793c2b] START, GetVGInfoVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, VGID=dEB6XY-cNBu-hsoz-avXf-m6Vp-4jOq-eDMCBK), log id: 14fc74a1
2015-01-06 10:03:32,102 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-55) [27e4f6a6] Running storage connections refresh for hosts [ab744426-2294-4c0b-aaf5-08ebb162f542, 920bf64c-62f5-4b12-a69e-eef9936576c5]
2015-01-06 10:03:32,107 INFO [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-8-thread-27) [4bd57497] Running command: ConnectHostToStoragePoolServersCommand internal: true. Entities affected : ID: 00000002-0002-0002-0002-00000000000f Type: StoragePool
2015-01-06 10:03:32,107 INFO [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-8-thread-15) [2ad98c66] Running command: ConnectHostToStoragePoolServersCommand internal: true. Entities affected : ID: 00000002-0002-0002-0002-00000000000f Type: StoragePool
2015-01-06 10:03:32,163 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-27) [4bd57497] START, ConnectStorageServerVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = NFS, connectionList = [{ id: 79bae336-2d76-4986-bf1e-50e350cde9f7, connection: cluster.33sdis.fr:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 1acd4935
2015-01-06 10:03:32,167 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-15) [2ad98c66] START, ConnectStorageServerVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = NFS, connectionList = [{ id: 79bae336-2d76-4986-bf1e-50e350cde9f7, connection: cluster.33sdis.fr:/var/lib/exports/iso, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 10087d1
2015-01-06 10:03:32,196 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-15) [2ad98c66] FINISH, ConnectStorageServerVDSCommand, return: {79bae336-2d76-4986-bf1e-50e350cde9f7=0}, log id: 10087d1
2015-01-06 10:03:32,199 INFO [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-8-thread-15) [2ad98c66] Host n3orna storage connection was succeeded
2015-01-06 10:03:32,643 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-32) [afbbe0a] FINISH, GetVGInfoVDSCommand, return: [LUNs [id=3600c0ff000149e9d22c0105401000000, physicalVolumeId=OMa7pu-EyxG-mdNM-9EP5-eP2f-2GPV-tWiexe, volumeGroupId=CtELsP-UYZb-zgRH-bVz7-lijD-Elfy-frzEJp, serial=SHP_P2000_G3_FC_00c0ff149e9d000022c0105401000000, lunMapping=0, vendorId=HP, productId=P2000 G3 FC, _lunConnections=[], deviceSize=5583, vendorName=HP, pathsDictionary={sdb=true, sdd=true}, lunType=UNKNOWN, status=null, diskId=null, diskAlias=null, storageDomainId=05d4e406-de6d-47dc-8414-bdc7381b6d4a, storageDomainName=null]], log id: 19fe6181
2015-01-06 10:03:32,649 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVGInfoVDSCommand] (org.ovirt.thread.pool-8-thread-7) [73793c2b] FINISH, GetVGInfoVDSCommand, return: [LUNs [id=3600c0ff00015986a35d98a5401000000, physicalVolumeId=oWeQjL-dOc0-G9dH-wu1g-8pCd-jGjG-rzL0ZM, volumeGroupId=dEB6XY-cNBu-hsoz-avXf-m6Vp-4jOq-eDMCBK, serial=SHP_P2000_G3_FC_00c0ff15986a000035d98a5401000000, lunMapping=0, vendorId=HP, productId=P2000 G3 FC, _lunConnections=[], deviceSize=5583, vendorName=HP, pathsDictionary={sdc=true, sde=true}, lunType=UNKNOWN, status=null, diskId=null, diskAlias=null, storageDomainId=75a3e25a-2ea6-4b95-9d9f-9fe791e38e97, storageDomainName=null]], log id: 14fc74a1
2015-01-06 10:03:32,664 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-27) [4bd57497] FINISH, ConnectStorageServerVDSCommand, return: {79bae336-2d76-4986-bf1e-50e350cde9f7=0}, log id: 1acd4935
2015-01-06 10:03:32,668 INFO [org.ovirt.engine.core.bll.storage.ConnectHostToStoragePoolServersCommand] (org.ovirt.thread.pool-8-thread-27) [4bd57497] Host n4orna storage connection was succeeded
2015-01-06 10:03:32,671 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-55) [27e4f6a6] Submitting to the event queue pool refresh for hosts [ab744426-2294-4c0b-aaf5-08ebb162f542, 920bf64c-62f5-4b12-a69e-eef9936576c5]
2015-01-06 10:03:32,675 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-35) Running storage pool metadata refresh for hosts {1}
2015-01-06 10:03:32,704 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (org.ovirt.thread.pool-8-thread-10) START, ConnectStoragePoolVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, vdsId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, masterVersion = 1), log id: 66dc8feb
2015-01-06 10:03:32,705 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (org.ovirt.thread.pool-8-thread-41) START, ConnectStoragePoolVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, vdsId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, masterVersion = 1), log id: 1a6cc20f
2015-01-06 10:03:34,083 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (org.ovirt.thread.pool-8-thread-10) FINISH, ConnectStoragePoolVDSCommand, log id: 66dc8feb
2015-01-06 10:03:34,307 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand] (org.ovirt.thread.pool-8-thread-41) FINISH, ConnectStoragePoolVDSCommand, log id: 1a6cc20f
2015-01-06 10:03:37,386 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-47) Host 60d8c75b-38f1-4cd0-b162-729285eadefd no longer storage access problem to any relevant domain clearing its report (report id: 6a081ad8-201f-48e4-ba56-5a581fac4475)
2015-01-06 10:03:37,392 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-47) Domain 75a3e25a-2ea6-4b95-9d9f-9fe791e38e97:MSA2-O2 recovered from problem. vds: n1orna
2015-01-06 10:03:37,465 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-34) Host 38893029-0eb8-4d19-a28f-07680d8d6868 no longer storage access problem to any relevant domain clearing its report (report id: 922d1075-c975-4d9f-87d1-db8d5aa4059d)
2015-01-06 10:03:37,471 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-34) Domain 75a3e25a-2ea6-4b95-9d9f-9fe791e38e97:MSA2-O2 recovered from problem. vds: n2orna
2015-01-06 10:03:37,475 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-34) Domain 75a3e25a-2ea6-4b95-9d9f-9fe791e38e97:MSA2-O2 has recovered from problem. No active host in the DC is reporting it as problematic, so clearing the domain recovery timer.
2015-01-06 10:03:37,634 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-11) Host 920bf64c-62f5-4b12-a69e-eef9936576c5 no longer storage access problem to any relevant domain clearing its report (report id: 12dfe3fe-966a-486e-8eda-89575648545f)
2015-01-06 10:03:39,263 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) Host ab744426-2294-4c0b-aaf5-08ebb162f542 no longer storage access problem to any relevant domain clearing its report (report id: 18fffe35-403c-413e-acb1-8d173aa447da)
2015-01-06 10:03:39,268 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) Domain 05d4e406-de6d-47dc-8414-bdc7381b6d4a:MSA1-O2-vd01 recovered from problem. vds: n3orna
2015-01-06 10:03:39,270 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (org.ovirt.thread.pool-8-thread-17) Domain 05d4e406-de6d-47dc-8414-bdc7381b6d4a:MSA1-O2-vd01 has recovered from problem. No active host in the DC is reporting it as problematic, so clearing the domain recovery timer.
2015-01-06 10:03:44,310 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData] (DefaultQuartzScheduler_Worker-45) [5770eda1] No hosts has reported storage access problem to domains, clearing the handled hosts reports map
2015-01-06 10:05:00,006 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-29) [1cbf3978] Autorecovering 1 storage domains
2015-01-06 10:05:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (DefaultQuartzScheduler_Worker-29) [1cbf3978] Autorecovering storage domains id: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7
2015-01-06 10:05:00,014 INFO [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-29) [344e83b5] Running command: ConnectDomainToStorageCommand internal: true. Entities affected : ID: 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 Type: Storage
2015-01-06 10:05:00,019 INFO [org.ovirt.engine.core.bll.storage.ConnectDomainToStorageCommand] (DefaultQuartzScheduler_Worker-29) [344e83b5] ConnectDomainToStorage. Before Connect all hosts to pool. Time:1/6/15 10:05 AM
2015-01-06 10:05:00,209 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-37) [344e83b5] START, ConnectStorageServerVDSCommand(HostName = n4orna, HostId = 920bf64c-62f5-4b12-a69e-eef9936576c5, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 1d6d563c
2015-01-06 10:05:00,210 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-28) [344e83b5] START, ConnectStorageServerVDSCommand(HostName = n1orna, HostId = 60d8c75b-38f1-4cd0-b162-729285eadefd, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 12a471c7
2015-01-06 10:05:00,258 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-34) [344e83b5] START, ConnectStorageServerVDSCommand(HostName = n3orna, HostId = ab744426-2294-4c0b-aaf5-08ebb162f542, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 70547eab
2015-01-06 10:05:00,258 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (org.ovirt.thread.pool-8-thread-19) [344e83b5] START, ConnectStorageServerVDSCommand(HostName = n2orna, HostId = 38893029-0eb8-4d19-a28f-07680d8d6868, storagePoolId = 00000002-0002-0002-0002-00000000000f, storageType = ISCSI, connectionList = [{ id: 1b9f3167-3236-431e-93c2-ab5ee18eba04, connection: 10.23.2.199, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };{ id: ea5971f8-e1a0-42e3-826d-b95e9031ce53, connection: 10.23.2.198, iqn: iqn.1999-06.com.fujitsu-siemens:0907d7d0e3.a, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 6438f79b
2015-01-06 10:06:11,746 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-83) [453f31bd] VM ciril-recette e2abb477-f807-470f-ae20-a7205e690638 moved from Up --> RebootInProgress
2015-01-06 10:06:39,611 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-84) [537321db] VM ciril-recette e2abb477-f807-470f-ae20-a7205e690638 moved from RebootInProgress --> Up
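
The sequence above is the interesting part: DetachStorageDomainVDS fails with code 358 because VDSM can no longer find domain 34c95a44-db7f-4d0f-ba13-5f06a7feefe7 (fujitsu_backup_rsync) on storage, the engine compensates and fails the SPM over, and the AutoRecoveryManager then retries the very same domain at 10:05. A minimal sketch to confirm that the host side and the engine side disagree, assuming shell access to the SPM host and to the engine; the psql query is an assumption about the 3.5 engine schema:

# On the SPM host: which domains does VDSM actually see?
vdsClient -s 0 getStorageDomainsList
vdsClient -s 0 getStorageDomainInfo 34c95a44-db7f-4d0f-ba13-5f06a7feefe7

# On the engine: what the database still records for that domain
# (database name "engine" and the storage_domains view columns are
# assumptions for 3.5).
su - postgres -c "psql engine -c \"select id, storage_name, status from storage_domains where id = '34c95a44-db7f-4d0f-ba13-5f06a7feefe7';\""

If VDSM reports the domain missing while the engine still lists it, the detach will keep failing and the auto-recovery loop will keep retrying.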

Re: [ovirt-users] 3.5 live merge findings and mysteries [was Re: Simple way to activate live merge in FC20 cluster]
by Gianluca Cecchi 06 Jan '15
On Fri, Dec 12, 2014 at 4:32 PM, Itamar Heim <iheim(a)redhat.com> wrote:
>
> On 11/21/2014 09:53 AM, Gianluca Cecchi wrote:
>
>> So the official statement is this one at:
>> http://www.ovirt.org/OVirt_3.5_Release_Notes
>>
>> Live Merge
>> If an image has one or more snapshots, oVirt 3.5's merge command will
>> combine the data of one volume into another. Live merges can be
>> performed with data is pulled from one snapshot into another snapshot.
>> The engine can merge multiple disks at the same time and each merge can
>> independently fail or succeed in each operation.
>>
>> I think we should remove the part above, or at least have one of the
>> developers clarify it.
>> The feature is in my opinion very important and crucial for oVirt/RHEV
>> because it almost fills the gap with VMware, especially in
>> development environments, where flexibility in snapshot
>> management is very important and could be a starting point for a
>> greater user base to familiarize themselves with the product and adopt it.
>>
>> So these are my findings across all the combinations I tried; none of them
>> was able to provide live merge.
>> Could anyone tell me where I'm going wrong, or correct the release notes?
>>
>> 1) Environment with All-In-One F20
>>
>> installing oVirt AIO on F20 automatically gives the virt-preview repo
>> through the ovirt-3.5-dependencies.repo file, but only for libvirt*
>> packages:
>>
>> the same server is the engine and the hypervisor
>>
>> [root@tekkaman qemu]# rpm -q libvirt
>> libvirt-1.2.9.1-1.fc20.x86_64
>>
>> [root@tekkaman qemu]# vdsClient -s 0 getVdsCaps | grep -i merge
>> liveMerge = 'true'
>>
>> but
>> [root@tekkaman qemu]# rpm -q qemu
>> qemu-1.6.2-10.fc20.x86_64
>>
>> So when trying a live merge, it initially starts, but you get this in vdsm.log:
>> libvirtError: unsupported configuration: active commit not supported
>> with this QEMU binary
>>
>>
>> 2) Another separate environment with a dedicated F20 3.5 engine
>> and 4 test cases tried
>>
>> a) ovirt node installed and put in a dedicated cluster
>> the latest available seems to be
>> ovirt-node-iso-3.5.0.ovirt35.20140912.el6.iso from the 3.5 RC test days
>>
>> At the end of oVirt Node install and activation in engine:
>> [root@ovnode01 ~]# vdsClient -s 0 getVdsCaps | grep -i merge
>> liveMerge = 'false'
>>
>> [root@ovnode01 ~]# rpm -qa libvirt* qemu*
>> libvirt-0.10.2-29.el6_5.12.x86_64
>> libvirt-lock-sanlock-0.10.2-29.el6_5.12.x86_64
>> libvirt-python-0.10.2-29.el6_5.12.x86_64
>> qemu-kvm-tools-0.12.1.2-2.415.el6_5.14.x86_64
>> qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>> libvirt-client-0.10.2-29.el6_5.12.x86_64
>> qemu-img-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>>
>>
>>
>> b) f20 + latest updates host installed as OS and then installed from
>> webadmin in another cluster
>> virt-preview is not enabled on the host, so libvirt/qemu are not ready
>>
>>
>> [root@ovnode02 network-scripts]# vdsClient -s 0 getVdsCaps | grep -i
>> merge
>> liveMerge = 'false'
>>
>> [root@ovnode02 network-scripts]# rpm -qa libvirt* qemu*
>> libvirt-daemon-1.1.3.6-2.fc20.x86_64
>> libvirt-python-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-config-nwfilter-1.1.3.6-2.fc20.x86_64
>> qemu-kvm-1.6.2-10.fc20.x86_64
>> qemu-common-1.6.2-10.fc20.x86_64
>> libvirt-client-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-network-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-nwfilter-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-interface-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-nodedev-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-secret-1.1.3.6-2.fc20.x86_64
>> qemu-system-x86-1.6.2-10.fc20.x86_64
>> libvirt-daemon-kvm-1.1.3.6-2.fc20.x86_64
>> qemu-kvm-tools-1.6.2-10.fc20.x86_64
>> qemu-img-1.6.2-10.fc20.x86_64
>> libvirt-daemon-driver-qemu-1.1.3.6-2.fc20.x86_64
>> libvirt-daemon-driver-storage-1.1.3.6-2.fc20.x86_64
>> libvirt-lock-sanlock-1.1.3.6-2.fc20.x86_64
>>
>>
>>
>> c) CentOS 6.6 host + latest updates installed as OS and then installed
>> from webadmin in another cluster
>>
>> [root@ovnode03 ~]# vdsClient -s 0 getVdsCaps | grep -i merge
>> liveMerge = 'false'
>>
>> [root@ovnode03 ~]# rpm -qa libvirt* qemu*
>> libvirt-python-0.10.2-46.el6_6.2.x86_64
>> qemu-kvm-rhev-tools-0.12.1.2-2.415.el6_5.14.x86_64
>> qemu-img-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>> libvirt-client-0.10.2-46.el6_6.2.x86_64
>> libvirt-0.10.2-46.el6_6.2.x86_64
>> qemu-kvm-rhev-0.12.1.2-2.415.el6_5.14.x86_64
>> libvirt-lock-sanlock-0.10.2-46.el6_6.2.x86_64
>>
>>
>>
>> d) CentOS 7.0 host + latest updates installed as OS and then installed
>> from webadmin in another cluster
>>
>> [root@ovnode04 ~]# vdsClient -s 0 getVdsCaps | grep -i merge
>> liveMerge = 'false'
>>
>> [root@ovnode04 ~]# rpm -qa qemu* libvirt*
>> qemu-img-rhev-1.5.3-60.el7_0.2.x86_64
>> libvirt-daemon-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-storage-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-nodedev-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-kvm-1.1.1-29.el7_0.3.x86_64
>> qemu-kvm-tools-rhev-1.5.3-60.el7_0.2.x86_64
>> qemu-kvm-common-rhev-1.5.3-60.el7_0.2.x86_64
>> libvirt-client-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-nwfilter-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-interface-1.1.1-29.el7_0.3.x86_64
>> libvirt-lock-sanlock-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-config-nwfilter-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-network-1.1.1-29.el7_0.3.x86_64
>> qemu-kvm-rhev-1.5.3-60.el7_0.2.x86_64
>> libvirt-python-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-secret-1.1.1-29.el7_0.3.x86_64
>> libvirt-daemon-driver-qemu-1.1.1-29.el7_0.3.x86_64
>>
>>
>> Thanks in advance,
>> Gianluca
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users(a)ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> was this resolved?
>
In my opinion it was not resolved, and what is written in the release notes
doesn't correspond to the feature set.
See my test cases, and if possible describe other test cases where live merge
is usable.
See also other findings about live merge on the active layer here:
http://lists.ovirt.org/pipermail/users/2014-November/029450.html
Gianluca
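
For anyone repeating this survey, a minimal sketch that batches the same two checks used in each test case above, assuming root SSH access to the four hosts (names as in the tests):

for h in ovnode01 ovnode02 ovnode03 ovnode04; do
    echo "== $h =="
    # VDSM's own verdict on the live merge capability
    ssh root@"$h" "vdsClient -s 0 getVdsCaps | grep -i liveMerge"
    # the libvirt/qemu levels that determine that verdict
    ssh root@"$h" "rpm -qa 'libvirt*' 'qemu*' | sort"
done

Note that liveMerge = 'true' is necessary but not sufficient: in test case 1 the F20 AIO host reports 'true' thanks to the virt-preview libvirt, yet the merge still aborts because qemu 1.6.2 lacks active commit.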