Dear Martin,

Here is my oVirt engine log from the time the Ansible script ran. Can you please check the log:
2017-06-17 10:13:53,478 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-5) [] User admin@internal successfully logged in with scopes: ovirt-app-api ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
2017-06-17 10:13:54,076 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-2) [2cefa073] Running command: CreateUserSessionCommand internal: false.
2017-06-17 10:13:54,227 INFO [org.ovirt.engine.core.bll.AddVmCommand] (default task-1) [22a2647d] Lock Acquired to object 'EngineLock:{exclusiveLocks='[myvm05=<VM_NAME, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='[71e0d46a-d8b8-48dc-bae4-3895a42fa005=<TEMPLATE, ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName myvm05>, 4aba3e03-6665-46d5-882c-8d8931a563f0=<DISK, ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName myvm05>]'}'
2017-06-17 10:13:54,287 INFO [org.ovirt.engine.core.bll.AddVmCommand] (default task-1) [22a2647d] Running command: AddVmCommand internal: false. Entities affected : ID: ffc26dac-0aeb-4486-8ce4-6eada0a99f0a Type: ClusterAction group CREATE_VM with role type USER, ID: 71e0d46a-d8b8-48dc-bae4-3895a42fa005 Type: VmTemplateAction group CREATE_VM with role type USER, ID: 4a62e1c2-8942-48f6-a3c4-1e54d158e13a Type: StorageAction group CREATE_DISK with role type USER
2017-06-17 10:13:54,362 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default task-1) [] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{runAsync='true', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c', status='ImageLocked', exitStatus='Normal'}), log id: 758c11b1
2017-06-17 10:13:54,366 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (default task-1) [] FINISH, SetVmStatusVDSCommand, log id: 758c11b1
2017-06-17 10:13:54,375 INFO [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (default task-1) [431bbdcb] Running command: CreateSnapshotFromTemplateCommand internal: true. Entities affected : ID: 4a62e1c2-8942-48f6-a3c4-1e54d158e13a Type: Storage
2017-06-17 10:13:54,389 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] (default task-1) [431bbdcb] START, CreateSnapshotVDSCommand( CreateSnapshotVDSCommandParameters:{runAsync='true', storagePoolId='ca3d926a-fff6-41e3-bddd-f244289713fc', ignoreFailoverLimit='false', storageDomainId='4a62e1c2-8942-48f6-a3c4-1e54d158e13a', imageGroupId='67bfd3f8-8def-4b3a-bb45-c52aed5d1a4f', imageSizeInBytes='17179869184', volumeFormat='COW', newImageId='b5142477-2134-470e-8ba4-a2e72119ec9a', newImageDescription='', imageInitialSizeInBytes='0', imageId='901367d7-8a2a-4f7e-80e8-636738ea3990', sourceImageGroupId='4aba3e03-6665-46d5-882c-8d8931a563f0'}), log id: 1931032
2017-06-17 10:13:54,390 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] (default task-1) [431bbdcb] -- executeIrsBrokerCommand: calling 'createVolume' with two new parameters: description and UUID
2017-06-17 10:13:55,518 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CreateSnapshotVDSCommand] (default task-1) [431bbdcb] FINISH, CreateSnapshotVDSCommand, return: b5142477-2134-470e-8ba4-a2e72119ec9a, log id: 1931032
2017-06-17 10:13:55,521 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (default task-1) [431bbdcb] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 'b35f7ba3-89fc-4720-86e2-98cfa2ac4aeb'
2017-06-17 10:13:55,521 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (default task-1) [431bbdcb] CommandMultiAsyncTasks::attachTask: Attaching task 'a236b425-4414-49d8-860b-a0e932db175f' to command 'b35f7ba3-89fc-4720-86e2-98cfa2ac4aeb'.
2017-06-17 10:13:55,529 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (default task-1) [431bbdcb] Adding task 'a236b425-4414-49d8-860b-a0e932db175f' (Parent Command 'CreateSnapshotFromTemplate', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'), polling hasn't started yet..
2017-06-17 10:13:55,571 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-1) [431bbdcb] Correlation ID: 22a2647d, Job ID: 638b7524-baaa-4bb2-bb36-e02b78bb5dfd, Call Stack: null, Custom Event ID: -1, Message: VM myvm05 creation was initiated by admin@internal-authz.
2017-06-17 10:13:55,571 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (default task-1) [431bbdcb] BaseAsyncTask::startPollingTask: Starting to poll task 'a236b425-4414-49d8-860b-a0e932db175f'.
2017-06-17 10:13:57,274 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler5) [431bbdcb] Command 'AddVm' (id: 'd089dd43-f977-4eb7-aa22-245d8ab69e5d') waiting on child command id: 'b35f7ba3-89fc-4720-86e2-98cfa2ac4aeb' type:'CreateSnapshotFromTemplate' to complete
2017-06-17 10:13:58,289 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler8) [3a68eba3] Polling and updating Async Tasks: 2 tasks, 1 tasks to poll now
2017-06-17 10:13:58,586 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler8) [3a68eba3] SPMAsyncTask::PollTask: Polling task 'a236b425-4414-49d8-860b-a0e932db175f' (Parent Command 'CreateSnapshotFromTemplate', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'.
2017-06-17 10:13:58,589 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler8) [3a68eba3] BaseAsyncTask::onTaskEndSuccess: Task 'a236b425-4414-49d8-860b-a0e932db175f' (Parent Command 'CreateSnapshotFromTemplate', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully.
2017-06-17 10:13:58,591 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler8) [3a68eba3] CommandAsyncTask::endActionIfNecessary: All tasks of command 'b35f7ba3-89fc-4720-86e2-98cfa2ac4aeb' has ended -> executing 'endAction'
2017-06-17 10:13:58,591 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler8) [3a68eba3] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'b35f7ba3-89fc-4720-86e2-98cfa2ac4aeb'): calling endAction '.
2017-06-17 10:13:58,591 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-38) [3a68eba3] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'CreateSnapshotFromTemplate', executionIndex: '0'
2017-06-17 10:13:58,596 INFO [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] Command [id=b35f7ba3-89fc-4720-86e2-98cfa2ac4aeb]: Updating status to 'SUCCEEDED', The command end method logic will be executed by one of its parent commands.
2017-06-17 10:13:58,597 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'CreateSnapshotFromTemplate' completed, handling the result.
2017-06-17 10:13:58,597 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] CommandAsyncTask::HandleEndActionResult [within thread]: endAction for action type 'CreateSnapshotFromTemplate' succeeded, clearing tasks.
2017-06-17 10:13:58,597 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] SPMAsyncTask::ClearAsyncTask: Attempting to clear task 'a236b425-4414-49d8-860b-a0e932db175f'
2017-06-17 10:13:58,598 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] START, SPMClearTaskVDSCommand( SPMTaskGuidBaseVDSCommandParameters:{runAsync='true', storagePoolId='ca3d926a-fff6-41e3-bddd-f244289713fc', ignoreFailoverLimit='false', taskId='a236b425-4414-49d8-860b-a0e932db175f'}), log id: 37a1669b
2017-06-17 10:13:58,598 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] START, HSMClearTaskVDSCommand(HostName = h1, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='e62505b1-d9b6-4231-992c-8e4851c66624', taskId='a236b425-4414-49d8-860b-a0e932db175f'}), log id: 5ecec77e
2017-06-17 10:13:59,633 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] FINISH, HSMClearTaskVDSCommand, log id: 5ecec77e
2017-06-17 10:13:59,633 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMClearTaskVDSCommand] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] FINISH, SPMClearTaskVDSCommand, log id: 37a1669b
2017-06-17 10:13:59,635 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] BaseAsyncTask::removeTaskFromDB: Removed task 'a236b425-4414-49d8-860b-a0e932db175f' from DataBase
2017-06-17 10:13:59,635 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-38) [431bbdcb] CommandAsyncTask::HandleEndActionResult [within thread]: Removing CommandMultiAsyncTasks object for entity 'b35f7ba3-89fc-4720-86e2-98cfa2ac4aeb'
2017-06-17 10:14:01,284 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (DefaultQuartzScheduler7) [431bbdcb] Command 'AddVm' id: 'd089dd43-f977-4eb7-aa22-245d8ab69e5d' child commands '[b35f7ba3-89fc-4720-86e2-98cfa2ac4aeb]' executions were completed, status 'SUCCEEDED'
2017-06-17 10:14:02,299 INFO [org.ovirt.engine.core.bll.AddVmCommand] (DefaultQuartzScheduler2) [431bbdcb] Ending command 'org.ovirt.engine.core.bll.AddVmCommand' successfully.
2017-06-17 10:14:02,304 INFO [org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand] (DefaultQuartzScheduler2) [431bbdcb] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotFromTemplateCommand' successfully.
2017-06-17 10:14:02,309 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (DefaultQuartzScheduler2) [431bbdcb] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{runAsync='true', storagePoolId='ca3d926a-fff6-41e3-bddd-f244289713fc', ignoreFailoverLimit='false', storageDomainId='4a62e1c2-8942-48f6-a3c4-1e54d158e13a', imageGroupId='67bfd3f8-8def-4b3a-bb45-c52aed5d1a4f', imageId='b5142477-2134-470e-8ba4-a2e72119ec9a'}), log id: 6fd9785f
2017-06-17 10:14:03,344 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (DefaultQuartzScheduler2) [431bbdcb] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.businessentities.storage.DiskImage@22deef7c, log id: 6fd9785f
2017-06-17 10:14:03,356 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (DefaultQuartzScheduler2) [] START, SetVmStatusVDSCommand( SetVmStatusVDSCommandParameters:{runAsync='true', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c', status='Down', exitStatus='Normal'}), log id: 44ed9ea6
2017-06-17 10:14:03,358 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (DefaultQuartzScheduler2) [] FINISH, SetVmStatusVDSCommand, log id: 44ed9ea6
2017-06-17 10:14:03,362 INFO [org.ovirt.engine.core.bll.AddVmCommand] (DefaultQuartzScheduler2) [] Lock freed to object 'EngineLock:{exclusiveLocks='[myvm05=<VM_NAME, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='[71e0d46a-d8b8-48dc-bae4-3895a42fa005=<TEMPLATE, ACTION_TYPE_FAILED_TEMPLATE_IS_USED_FOR_CREATE_VM$VmName myvm05>, 4aba3e03-6665-46d5-882c-8d8931a563f0=<DISK, ACTION_TYPE_FAILED_DISK_IS_USED_FOR_CREATE_VM$VmName myvm05>]'}'
2017-06-17 10:14:03,370 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler2) [] Correlation ID: 22a2647d, Job ID: 638b7524-baaa-4bb2-bb36-e02b78bb5dfd, Call Stack: null, Custom Event ID: -1, Message: VM myvm05 creation has been completed.
2017-06-17 10:14:04,768 INFO [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-19) [590cac7] Lock Acquired to object 'EngineLock:{exclusiveLocks='[535f7d2b-a3df-4f7e-91e1-7b564bff625c=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-06-17 10:14:04,777 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-19) [590cac7] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c'}), log id: 261674f1
2017-06-17 10:14:04,777 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-19) [590cac7] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 261674f1
2017-06-17 10:14:04,798 INFO [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-19) [590cac7] Running command: RunVmOnceCommand internal: false. Entities affected : ID: 535f7d2b-a3df-4f7e-91e1-7b564bff625c Type: VMAction group RUN_VM with role type USER
2017-06-17 10:14:04,817 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default task-19) [590cac7] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', hostId='null', vmId='00000000-0000-0000-0000-000000000000', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@e1224d5b'}), log id: bc72f78
2017-06-17 10:14:04,820 INFO [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default task-19) [590cac7] FINISH, UpdateVmDynamicDataVDSCommand, log id: bc72f78
2017-06-17 10:14:04,822 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (default task-19) [590cac7] START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', hostId='e62505b1-d9b6-4231-992c-8e4851c66624', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c', vm='VM [myvm05]'}), log id: 5d92bbd3
2017-06-17 10:14:04,825 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVmFromCloudInitVDSCommand] (default task-19) [590cac7] START, CreateVmFromCloudInitVDSCommand(HostName = h1, CreateVmVDSCommandParameters:{runAsync='true', hostId='e62505b1-d9b6-4231-992c-8e4851c66624', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c', vm='VM [myvm05]'}), log id: 7b28f107
2017-06-17 10:14:04,828 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmInfoBuilderBase] (default task-19) [590cac7] Bootable disk '67bfd3f8-8def-4b3a-bb45-c52aed5d1a4f' set to index '0'
2017-06-17 10:14:04,842 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (default task-19) [590cac7] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVmFromCloudInitVDSCommand pitReinjection=false,memGuaranteedSize=2000,smpThreadsPerCore=1,cpuType=Westmere,vmId=535f7d2b-a3df-4f7e-91e1-7b564bff625c,acpiEnable=true,vmType=kvm,smp=8,smpCoresPerSocket=2,emulatedMachine=pc-i440fx-rhel7.2.0,smartcardEnable=false,guestNumaNodes=[{memory=4096, cpus=0,1,2,3,4,5,6,7, nodeIndex=0}],transparentHugePages=true,displayNetwork=ovirtmgmt,vmName=myvm05,maxVCpus=32,kvmEnable=true,devices=[{iface=ide, shared=false, path=, readonly=true, index=3, type=disk, specParams={vmPayload={file={openstack/latest/meta_data.json=ewogICJuZXR3b3JrLWludGVyZmFjZXMiIDogImF1dG8gZXRoMFxuaWZhY2UgZXRoMCBpbmV0IHN0YXRpY1xuICBhZGRyZXNzIDEwLjEwLjEwLjUwXG4gIG5ldG1hc2sgMjU1LjI1NS4yNTUuMFxuICBnYXRld2F5IDEwLjEwLjEwLjFcbiAgZG5zLW5hbWVzZXJ2ZXJzIDEwOS4yMjQuMTQuMlxuICBkbnMtc2VhcmNoIGVsY2xkLm5ldFxuIiwKICAiYXZhaWxhYmlsaXR5X3pvbmUiIDogIm5vdmEiLAogICJob3N0bmFtZSIgOiAidm0wMSIsCiAgImxhdW5jaF9pbmRleCIgOiAiMCIsCiAgIm1ldGEiIDogewogICAgInJvbGUiIDogInNlcnZlciIsCiAgICAiZHNtb2RlIiA6ICJsb2NhbCIsCiAgICAiZXNzZW50aWFsIiA6ICJmYWxzZSIKICB9LAogICJuYW1lIiA6ICJ2bTAxIiwKICAibmV0d29ya19jb25maWciIDogewogICAgInBhdGgiIDogIi9ldGMvbmV0d29yay9pbnRlcmZhY2VzIiwKICAgICJjb250ZW50X3BhdGgiIDogIi9jb250ZW50LzAwMDAiCiAgfSwKICAidXVpZCIgOiAiNjdjOThkOWQtOWZlOC00ZWU2LTg0NGUtZjU4N2Y1Njc5NWExIgp9, openstack/content/0000=YXV0byBldGgwCmlmYWNlIGV0aDAgaW5ldCBzdGF0aWMKICBhZGRyZXNzIDEwLjEwLjEwLjUwCiAgbmV0bWFzayAyNTUuMjU1LjI1NS4wCiAgZ2F0ZXdheSAxMC4xMC4xMC4xCiAgZG5zLW5hbWVzZXJ2ZXJzIDEwOS4yMjQuMTQuMgogIGRucy1zZWFyY2ggZWxjbGQubmV0Cg==, openstack/latest/user_data=I2Nsb3VkLWNvbmZpZwpvdXRwdXQ6CiAgYWxsOiAnPj4gL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJwpwYXNzd29yZDogYm9vYm9vCmRpc2FibGVfcm9vdDogMApydW5jbWQ6Ci0gJ3NlZCAtaSAnJy9eZGF0YXNvdXJjZV9saXN0OiAvZCcnIC9ldGMvY2xvdWQvY2xvdWQuY2ZnOyBlY2hvICcnZGF0YXNvdXJjZV9saXN0OgogIFsiTm9DbG91ZCIsICJDb25maWdEcml2ZSJdJycgPj4gL2V0Yy9jbG91ZC9jbG91ZC5jZmcnCnNzaF9wd2F1dGg6IHRydWUKY2hwYXNzd2Q6CiAgZXhwaXJlOiBmYWxzZQp1c2VyOiByb290Cg==}, volId=config-2}}, device=cdrom, deviceId=95cee92c-3f8d-4b2b-93f9-679c854f84b0}, {type=video, specParams={vgamem=16384, heads=1, vram=32768, ram=65536}, device=qxl, deviceId=5cb25f3c-dccb-4a8e-87d7-5873db429f4e}, {type=graphics, specParams={spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,ssmartcard,susbredir, fileTransferEnable=true, spiceSslCipherSuite=DEFAULT, copyPasteEnable=true}, device=spice, deviceId=4f91a674-25e7-45be-92a6-50285b15d130}, {iface=ide, shared=false, path=, address={bus=1, controller=0, unit=0, type=drive, target=0}, readonly=true, index=2, type=disk, specParams={path=}, device=cdrom, deviceId=abc8d3e6-ef97-4e2a-9559-ee62519a8e81}, {shared=false, address={bus=0x00, domain=0x0000, function=0x0, slot=0x06, type=pci}, imageID=67bfd3f8-8def-4b3a-bb45-c52aed5d1a4f, format=cow, index=0, optional=false, type=disk, deviceId=67bfd3f8-8def-4b3a-bb45-c52aed5d1a4f, domainID=4a62e1c2-8942-48f6-a3c4-1e54d158e13a, propagateErrors=off, iface=virtio, readonly=false, bootOrder=1, poolID=ca3d926a-fff6-41e3-bddd-f244289713fc, volumeID=b5142477-2134-470e-8ba4-a2e72119ec9a, specParams={}, device=disk}, {filter=vdsm-no-mac-spoofing, nicModel=pv, address={bus=0x00, domain=0x0000, function=0x0, slot=0x03, type=pci}, type=interface, specParams={inbound={}, outbound={}}, device=bridge, linkActive=true, deviceId=d90c311d-1d07-445d-bc0c-93da4fc38327, macAddr=00:1a:4a:16:01:69, network=ovirtmgmt}, {address={bus=0x00, function=0x0, domain=0x0000, slot=0x07, type=pci}, type=balloon, specParams={model=virtio}, device=memballoon, deviceId=dcf2375e-54b2-471f-9dba-a0c7d509492f}, {index=0, model=virtio-scsi, type=controller, specParams={}, device=scsi, deviceId=e1eb89e1-77df-4677-ba9a-4b3c92a46e69}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x05, type=pci}, type=controller, specParams={}, device=virtio-serial, deviceId=3d096c2f-fdbd-43dd-8243-11e562e0a612}],custom={device_966e95b5-6a22-4836-aad0-c9f4bb79389e=VmDevice:{id='VmDeviceId:{deviceId='966e95b5-6a22-4836-aad0-c9f4bb79389e', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}, device_966e95b5-6a22-4836-aad0-c9f4bb79389edevice_5172a707-33ea-4f05-855b-40232655ccb6=VmDevice:{id='VmDeviceId:{deviceId='5172a707-33ea-4f05-855b-40232655ccb6', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c'}', device='spicevmc', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=3}', managed='false', plugged='true', readOnly='false', deviceAlias='channel2', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}, device_966e95b5-6a22-4836-aad0-c9f4bb79389edevice_5172a707-33ea-4f05-855b-40232655ccb6device_60409373-848b-43e5-8c8c-117688b0ba71=VmDevice:{id='VmDeviceId:{deviceId='60409373-848b-43e5-8c8c-117688b0ba71', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}, device_966e95b5-6a22-4836-aad0-c9f4bb79389edevice_5172a707-33ea-4f05-855b-40232655ccb6device_60409373-848b-43e5-8c8c-117688b0ba71device_63038946-00aa-4b0a-9045-5dfb47b31e89=VmDevice:{id='VmDeviceId:{deviceId='63038946-00aa-4b0a-9045-5dfb47b31e89', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}},display=qxl,timeOffset=0,nice=0,maxMemSize=4194304,maxMemSlots=16,bootMenuEnable=false,memSize=4096
2017-06-17 10:14:04,856 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVmFromCloudInitVDSCommand] (default task-19) [590cac7] FINISH, CreateVmFromCloudInitVDSCommand, log id: 7b28f107
2017-06-17 10:14:04,860 INFO [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (default task-19) [590cac7] FINISH, CreateVmVDSCommand, return: WaitForLaunch, log id: 5d92bbd3
2017-06-17 10:14:04,860 INFO [org.ovirt.engine.core.bll.RunVmOnceCommand] (default task-19) [590cac7] Lock freed to object 'EngineLock:{exclusiveLocks='[535f7d2b-a3df-4f7e-91e1-7b564bff625c=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-06-17 10:14:04,863 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-19) [590cac7] Correlation ID: 590cac7, Job ID: a66974f1-d8f1-477d-85cc-51c3ef70ebdb, Call Stack: null, Custom Event ID: -1, Message: VM myvm05 was started by admin@internal-authz (Host: h1).
2017-06-17 10:14:06,360 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-1) [] VM '535f7d2b-a3df-4f7e-91e1-7b564bff625c'(myvm05) moved from 'WaitForLaunch' --> 'PoweringUp'
2017-06-17 10:14:06,367 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-1) [] START, FullListVDSCommand(HostName = , FullListVDSCommandParameters:{runAsync='true', hostId='e62505b1-d9b6-4231-992c-8e4851c66624', vds='Host[,e62505b1-d9b6-4231-992c-8e4851c66624]', vmIds='[535f7d2b-a3df-4f7e-91e1-7b564bff625c]'}), log id: 3a801479
2017-06-17 10:14:06,376 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-1) [] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.2.0, vmId=535f7d2b-a3df-4f7e-91e1-7b564bff625c, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Westmere, smp=8, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@8a07465, smartcardEnable=false, custom={device_966e95b5-6a22-4836-aad0-c9f4bb79389edevice_5172a707-33ea-4f05-855b-40232655ccb6device_60409373-848b-43e5-8c8c-117688b0ba71=VmDevice:{id='VmDeviceId:{deviceId='60409373-848b-43e5-8c8c-117688b0ba71', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}, device_966e95b5-6a22-4836-aad0-c9f4bb79389edevice_5172a707-33ea-4f05-855b-40232655ccb6=VmDevice:{id='VmDeviceId:{deviceId='5172a707-33ea-4f05-855b-40232655ccb6', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c'}', device='spicevmc', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=3}', managed='false', plugged='true', readOnly='false', deviceAlias='channel2', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}, device_966e95b5-6a22-4836-aad0-c9f4bb79389edevice_5172a707-33ea-4f05-855b-40232655ccb6device_60409373-848b-43e5-8c8c-117688b0ba71device_63038946-00aa-4b0a-9045-5dfb47b31e89=VmDevice:{id='VmDeviceId:{deviceId='63038946-00aa-4b0a-9045-5dfb47b31e89', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}, device_966e95b5-6a22-4836-aad0-c9f4bb79389e=VmDevice:{id='VmDeviceId:{deviceId='966e95b5-6a22-4836-aad0-c9f4bb79389e', vmId='535f7d2b-a3df-4f7e-91e1-7b564bff625c'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}}, vmType=kvm, memSize=4096, smpCoresPerSocket=2, vmName=myvm05, nice=0, status=Up, maxMemSize=4194304, bootMenuEnable=false, pid=1569, smpThreadsPerCore=1, memGuaranteedSize=2000, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@690fc889, display=qxl, maxVCpus=32, clientIp=, statusTime=4467062130, maxMemSlots=16}], log id: 3a801479
2017-06-17 10:14:06,379 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-1) [] Received a spice Device without an address when processing VM 535f7d2b-a3df-4f7e-91e1-7b564bff625c devices, skipping device: {device=spice, specParams={fileTransferEnable=true, displayNetwork=ovirtmgmt, displayIp=192.168.215.215, spiceSslCipherSuite=DEFAULT, spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,ssmartcard,susbredir, copyPasteEnable=true}, type=graphics, deviceId=4f91a674-25e7-45be-92a6-50285b15d130, tlsPort=5901}
2017-06-17 10:14:13,332 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler6) [3465d470] Fetched 2 VMs from VDS 'e62505b1-d9b6-4231-992c-8e4851c66624'
2017-06-17 10:14:54,749 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler4) [1a9e3b34] Setting new tasks map. The map contains now 1 tasks
2017-06-17 10:15:13,767 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler9) [3a68eba3] VM '535f7d2b-a3df-4f7e-91e1-7b564bff625c'(myvm05) moved from 'PoweringUp' --> 'Up'
2017-06-17 10:15:13,780 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler9) [3a68eba3] Correlation ID: 590cac7, Job ID: a66974f1-d8f1-477d-85cc-51c3ef70ebdb, Call Stack: null, Custom Event ID: -1, Message: VM myvm05 started on Host h1
2017-06-17 10:15:14,881 INFO [org.ovirt.engine.core.sso.servlets.OAuthRevokeServlet] (default task-62) [] User admin@internal successfully logged out
2017-06-17 10:15:14,889 INFO [org.ovirt.engine.core.bll.aaa.TerminateSessionsForTokenCommand] (default task-46) [361ecd2f] Running command: TerminateSessionsForTokenCommand internal: true.
2017-06-17 10:15:24,749 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler4) [1a9e3b34] Setting new tasks map. The map contains now 0 tasks
2017-06-17 10:15:24,749 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler4) [1a9e3b34] Cleared all tasks of pool 'ca3d926a-fff6-41e3-bddd-f244289713fc'.
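In case it helps with checking: each line above carries a bracketed flow/correlation ID after the thread name (for example [22a2647d] for the AddVm flow and [590cac7] for RunVmOnce), so one operation's lines can be pulled out of the log on their own. Here is a minimal, hypothetical sketch of that filtering; the regex assumes the standard engine.log layout "<timestamp> <LEVEL> [<class>] (<thread>) [<flow id>] <message>" seen above:

```python
import re

# Matches the bracketed flow ID that follows the "(thread)" field,
# e.g. "(default task-1) [22a2647d] ..." -> "22a2647d".
FLOW_ID = re.compile(r"\)\s+\[([0-9a-f-]*)\]")

def lines_for_flow(log_text, flow_id):
    """Return the engine.log lines whose bracketed flow ID equals flow_id."""
    matches = []
    for line in log_text.splitlines():
        m = FLOW_ID.search(line)
        if m and m.group(1) == flow_id:
            matches.append(line)
    return matches
```

For example, `lines_for_flow(log_text, "431bbdcb")` would return just the CreateSnapshotFromTemplate flow's entries.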
------=_Part_226735_346808180.1497684594828
Content-Type: text/html; charset=UTF-8
Content-Transfer-Encoding: 7bit
Hi.
I'm trying to set up a 3-node oVirt cluster with Gluster, following this guide:
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluste…
I've installed oVirt Node 4.1.2 on one partition and left a separate partition on all three nodes to hold the Gluster volumes. The problem is that I can't get through the gdeploy Gluster installation; it always fails with the error:
Error: Unsupported disk type!
PLAY [gluster_servers]
*********************************************************
TASK [Run a shell script]
******************************************************
changed: [host03] =>
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
host01,host02,host03)
changed: [host02] =>
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
host01,host02,host03)
changed: [host01] =>
(item=/usr/share/ansible/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h
host01,host02,host03)
TASK [debug]
*******************************************************************
ok: [host01] => {
"changed": false,
"msg": "All items completed"
}
ok: [host02] => {
"changed": false,
"msg": "All items completed"
}
ok: [host03] => {
"changed": false,
"msg": "All items completed"
}
PLAY RECAP
*********************************************************************
host01 : ok=2 changed=1 unreachable=0
failed=0
host02 : ok=2 changed=1 unreachable=0
failed=0
host03 : ok=2 changed=1 unreachable=0
failed=0
PLAY [gluster_servers]
*********************************************************
TASK [Enable or disable services]
**********************************************
ok: [host01] => (item=chronyd)
ok: [host03] => (item=chronyd)
ok: [host02] => (item=chronyd)
PLAY RECAP
*********************************************************************
host01 : ok=1 changed=0 unreachable=0
failed=0
host02 : ok=1 changed=0 unreachable=0
failed=0
host03 : ok=1 changed=0 unreachable=0
failed=0
PLAY [gluster_servers]
*********************************************************
TASK [start/stop/restart/reload services]
**************************************
changed: [host03] => (item=chronyd)
changed: [host01] => (item=chronyd)
changed: [host02] => (item=chronyd)
PLAY RECAP
*********************************************************************
host01 : ok=1 changed=1 unreachable=0
failed=0
host02 : ok=1 changed=1 unreachable=0
failed=0
host03 : ok=1 changed=1 unreachable=0
failed=0
Error: Unsupported disk type!
[root@host01 scripts]# fdisk -l
Disk /dev/sdb: 898.3 GB, 898319253504 bytes, 1754529792 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0629cdcf
Device Boot Start End Blocks Id System
Disk /dev/sda: 299.4 GB, 299439751168 bytes, 584843264 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x00007c39
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 2099199 1048576 83 Linux
/dev/sda2 2099200 584843263 291372032 8e Linux LVM
Disk /dev/mapper/onn_host01-swap: 16.9 GB, 16911433728 bytes, 33030144
sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/onn_host01-pool00_tmeta: 1073 MB, 1073741824 bytes,
2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/onn_host01-pool00_tdata: 264.3 GB, 264266317824 bytes,
516145152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/onn_host01-pool00-tpool: 264.3 GB, 264266317824 bytes,
516145152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disk /dev/mapper/onn_host01-ovirt--node--ng--4.1.2--0.20170613.0+1: 248.2
GB, 248160190464 bytes, 484687872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disk /dev/mapper/onn_host01-pool00: 264.3 GB, 264266317824 bytes, 516145152
sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disk /dev/mapper/onn_host01-var: 16.1 GB, 16106127360 bytes, 31457280
sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disk /dev/mapper/onn_host01-root: 248.2 GB, 248160190464 bytes, 484687872
sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Any input is appreciated.
Best regards
Jesper
Hi
Well, I've got myself into a fine mess.
host01 was set up with hosted-engine v4.1. This was successful.
Imported 3 VMs from a v3.6 oVirt AIO instance. (This oVirt 3.6 is still
running with more VMs on it.)
Tried to add host02 to the new oVirt 4.1 setup. This partially succeeded,
but I couldn't add any storage domains to it. Cannot remember why.
In the oVirt engine UI I removed host02.
I reinstalled host02 with CentOS 7 and tried to add it, but the oVirt UI
told me it was already there (even though it wasn't listed in the UI).
Renamed the reinstalled host02 to host03, changed the IP address,
reconfigured the DNS server and added host03 in the oVirt Engine UI.
All good, and I was able to import more VMs to it.
I was also able to shut down a VM on host01, assign it to host03 and start
the VM. Cool, everything working.
The above was all over the last couple of weeks.
This week I performed some yum updates on the Engine VM. No reboot.
Today I noticed that the oVirt services in the Engine VM were in an
endless restart loop. They would be up for about 5 minutes and then die.
Looking into /var/log/ovirt-engine/engine.log, I could only see errors
relating to host02. oVirt was trying to find it and failing, then falling
over.
I ran "hosted-engine --clean-metadata" thinking it would clean up and
remove bad references to hosts, but I now realise that was a really bad
idea, as it didn't do what I'd hoped.
At this point the sequence below worked; I could log in to the oVirt UI,
but after 5 minutes the services would be off:
service ovirt-engine restart
service ovirt-websocket-proxy restart
service httpd restart
I saw some references to having to remove hosts from the database by hand
in situations where, under the hood of oVirt, a decommissioned host was
still listed but wasn't showing in the GUI.
So I removed the references to host02 (vds_id and host_id) in the
following tables, in this order:
vds_dynamic
vds_statistics
vds_static
host_device
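(For reference, the removals just described correspond roughly to SQL like the following against the engine database; the UUID is a placeholder for host02's actual vds_id/host_id, and the order simply mirrors the list above.)

```sql
-- Hypothetical restatement of the manual cleanup described above;
-- replace the placeholder with host02's real vds_id / host_id.
DELETE FROM vds_dynamic    WHERE vds_id  = '<host02-uuid>';
DELETE FROM vds_statistics WHERE vds_id  = '<host02-uuid>';
DELETE FROM vds_static     WHERE vds_id  = '<host02-uuid>';
DELETE FROM host_device    WHERE host_id = '<host02-uuid>';
```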
Now when I try to start ovirt-websocket it will not start:
service ovirt-websocket start
Redirecting to /bin/systemctl start ovirt-websocket.service
Failed to start ovirt-websocket.service: Unit not found.
I'm now thinking that I need to do the following in the engine VM:
# engine-cleanup
# yum remove ovirt-engine
# yum install ovirt-engine
# engine-setup
But to run engine-cleanup I need to put the engine VM into maintenance
mode, and because of the --clean-metadata that I ran earlier on host01 I
cannot do that.
What is the best course of action from here?
Cheers
Andrew
I had 3 hosts running in a hosted engine setup, oVirt Engine Version:
4.1.2.2-1.el7.centos, using FC storage. One of my hosts went unresponsive
in the GUI, and attempts to bring it back were fruitless. I eventually
decided to just remove it and have gotten it removed from the GUI, but it
still shows in the "hosted-engine --vm-status" command on the other 2
hosts. The 2 good nodes show it as the following:
--== Host 3 status ==--
conf_on_shared_storage             : True
Status up-to-date                  : False
Hostname                           : host3.my.lab
Host ID                            : 3
Engine status                      : unknown stale-data
Score                              : 0
stopped                            : False
Local maintenance                  : True
crc32                              : bce9a8c5
local_conf_timestamp               : 2605898
Host timestamp                     : 2605882
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=2605882 (Thu Jun 15 15:18:13 2017)
        host-id=3
        score=0
        vm_conf_refresh_time=2605898 (Thu Jun 15 15:18:29 2017)
        conf_on_shared_storage=True
        maintenance=True
        state=LocalMaintenance
        stopped=False
How can I either remove this host altogether from the configuration, or
repair it so that it is back in a good state? The host is up but, due to
my earlier removal attempts, reports "unknown stale-data" for all 3 hosts
in the config.
Thanks
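(For reference, a stale slot like "Host 3" above is usually addressed with hosted-engine's own metadata cleanup. The sketch below is hypothetical and only prints its plan; the real commands, which act on shared cluster state, are left commented out and assume the other two hosts are healthy.)

```shell
# Hypothetical recovery sketch for the stale "Host 3" slot shown above.
#
# On one of the two healthy hosts, purge host 3's slot from the shared
# hosted-engine metadata:
#   hosted-engine --clean-metadata --host-id=3 --force-clean
#
# Or, to bring host 3 back instead, restart its HA services on host3.my.lab:
#   systemctl restart ovirt-ha-agent ovirt-ha-broker
PLAN="purge stale metadata for host-id 3"
echo "$PLAN"
```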
Hello,
among the problems solved in the upcoming 4.1.3 release I see this:
Lost Connection After Host Deploy when 4.1.3 Host Added to 4.1.2 Engine
tracked by
https://bugzilla.redhat.com/show_bug.cgi?id=1459484
As a matter of principle I would prefer to require that an engine's
version be greater than or equal to that of all the hosts it is intended
to manage.
I don't find it safe to allow this, and it is probably unnecessary
maintenance work... what do you think?
For example if you go here:
http://www.vmware.com/resources/compatibility/sim/interop_matrix.php#intero…
you can see that:
- a vCenter Server 5.0U3 cannot manage an ESXi 5.1 host
- a vCenter Server 5.1U3 cannot manage an ESXi 6.0 host
- a vCenter Server 6.0U3 cannot manage an ESXi 6.5 host
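The constraint being proposed here is, at heart, a version-tuple comparison; a hypothetical sketch (the function name and the plain x.y.z version format are illustrative only, not oVirt's actual check):

```python
# Illustrative rule: an engine may manage a host only when the engine's
# version is greater than or equal to the host's version.
def engine_can_manage(engine_ver: str, host_ver: str) -> bool:
    # Compare versions component-wise as integer tuples, e.g.
    # "4.1.2" -> (4, 1, 2).
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(engine_ver) >= as_tuple(host_ver)
```

Under such a rule a 4.1.2 engine would refuse a 4.1.3 host, which is exactly the pairing the bug above tripped over.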
In my opinion an administrator of the virtual infrastructure doesn't
expect to be able to manage newer hosts with an older engine... and
probably doesn't see this capability as added value.
Just my thoughts.
Cheers,
Gianluca
16 Jun '17
The oVirt Project is pleased to announce the availability of the Second
Release Candidate of oVirt 4.1.3 for testing, as of June 16th, 2017.
This is pre-release software. Please take a look at our community page[1]
to learn how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.
This update is the second release candidate of the third in a series of
stabilization updates to the 4.1 series.
4.1.3 brings more than 40 enhancements and more than 200 bugfixes,
including more than 120 high or urgent severity fixes, on top of the
oVirt 4.1 series.
This release is available now for:
* Fedora 24 (tech preview)
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.3 or later
* CentOS Linux (or similar) 7.3 or later
* oVirt Node 4.1
* Fedora 24 (tech preview)
See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is already available
- oVirt Live is already available[4]
- oVirt Node is already available[4]
We are addressing compose issues for the above components, which are
missing the ansible 2.3 and latest fluentd builds from the CentOS SIGs.
Additional Resources:
* Read more about the oVirt 4.1.3 release highlights:
http://www.ovirt.org/release/4.1.3/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.1.3/
[4] resources.ovirt.org/pub/ovirt-4.1-pre/iso/
--
SANDRO BONAZZOLA
ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
<https://red.ht/sig>
TRIED. TESTED. TRUSTED. <https://redhat.com/trusted>
I just read the blog entry about the performance increase for the Python SDK (https://www.ovirt.org/blog/2017/05/higher-performance-for-python-sdk/).
I'm quite sceptical about pipelining.
A few explanations can be found at:
https://devcentral.f5.com/articles/http-pipelining-a-security-risk-without-…
https://stackoverflow.com/questions/14810890/what-are-the-disadvantages-of-…
The post also talks about multiple connections, but doesn't use pycurl.CurlShare(). I think this might be very helpful, as it allows sharing cookies between handles; see https://curl.haxx.se/libcurl/c/CURLOPT_SHARE.html.
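A hypothetical sketch of that cookie-sharing idea, using only the standard library (the pycurl calls shown in the comments are libcurl's share API, not code from the blog post, and no network traffic is performed here):

```python
# Several parallel connections (openers here) consult one shared cookie
# store, so a session cookie obtained over one connection is reused by the
# others. This is the effect pycurl.CurlShare() gives multiple curl handles:
#   share = pycurl.CurlShare()
#   share.setopt(pycurl.SH_SHARE, pycurl.LOCK_DATA_COOKIE)
#   handle.setopt(pycurl.SHARE, share)
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()  # the single shared cookie store
openers = [
    urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    for _ in range(4)             # e.g. four parallel connections
]
```

Each opener both reads and updates the same jar, so no connection needs to authenticate separately.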
Dear Users
Procedure:
1- create a clean replica 2 distributed volume with glusterfs.
2- create a clean ovirt-engine machine.
3- create a clean vm from scratch, then create a template from this vm.
4- then create two vms from this template (vm1) & (vm2).
5- then delete the two vms.
6- create two new vms with the same names (vm1) & (vm2) from the template.
7- until now the two vms are stable and work correctly.
8- repeat step (7) three times; all vms keep working correctly.
Issue:
I have an ansible playbook to deploy vms to our oVirt; my playbook uses
the above template to deploy the vms. My issue is that after the ansible
script deploys the vms, all the vms' disks crash, the template disk
crashes as well, and the script changes the template's checksum hash.
You can look at the ansible parameters:
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: entering
      ovirt_auth:
        url: https://ovirt-engine.elcld.net:443/ovirt-engine/api
        username: admin@internal
        password: pass
        insecure: yes
    - name: creating
      ovirt_vms:
        auth: "{{ ovirt_auth }}"
        name: myvm05
        template: mahdi
        #state: present
        cluster: Cluster02
        memory: 4GiB
        cpu_cores: 2
        comment: Dev
        #type: server
        cloud_init:
          host_name: vm01
          user_name: root
          root_password: pass
          nic_on_boot: true
          nic_boot_protocol: static
          nic_name: eth0
          dns_servers: 109.224.19.5
          dns_search: elcld.net
          nic_ip_address: 10.10.20.2
          nic_netmask: 255.255.255.0
          nic_gateway: 10.10.20.1
    - name: Revoke
      ovirt_auth:
        state: absent
        ovirt_auth: "{{ ovirt_auth }}"
Can you assist me with this issue by checking whether anything is missing
in my ansible?
Best regards
Good morning oVirt community,
I'm running a three host gluster environment with hosted engine.
Yesterday the engine went down and has not been able to come up properly.
It tries to start on all three hosts.
I have two gluster volumes, data and engine. The data storage domain
volume is no longer mounted, but the engine volume is up. I've restarted
the gluster service and made sure both volumes were running. The data
volume will not mount.
How can I get the engine running properly again?
Thanks,
Joel
Hi,
When building a hosted engine VM and choosing 'nfs' for storage, the
install goes to this NFS share. Once the host is set up with, e.g., fibre
channel as storage for VMs, does the hosted engine get migrated
automatically to this storage? When does this actually happen?
Thanks,
Cam