[ovirt-users] replace ovirt engine host

Yedidyah Bar David didi at redhat.com
Wed Nov 12 13:12:07 UTC 2014


Sorry, no idea.

Does not seem very related to hosted-engine.

Perhaps it would be better to change the subject (add 'gluster'?) to attract other people.
Also, please post all relevant logs - hosted-engine, vdsm, and all engine logs.
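
A rough way to collect those, assuming default log locations:

# on the engine host
tar czf engine-logs.tar.gz /var/log/ovirt-engine
# on each node (the hosted-engine-ha directory only exists on hosted-engine hosts)
tar czf host-logs.tar.gz /var/log/vdsm /var/log/ovirt-hosted-engine-ha
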
-- 
Didi

----- Original Message -----
> From: "Ml Ml" <mliebherr99 at googlemail.com>
> To: "Matt ." <yamakasi.014 at gmail.com>
> Cc: users at ovirt.org
> Sent: Wednesday, November 12, 2014 3:06:04 PM
> Subject: Re: [ovirt-users] replace ovirt engine host
> 
> Anyone? :-(
> 
> On Tue, Nov 11, 2014 at 6:39 PM, Ml Ml <mliebherr99 at googlemail.com> wrote:
> > I don't know why this is all so simple for you.
> >
> > I just replaced the ovirt-engine as described in the docs.
> >
> > I ejected the CD ISOs from every VM, so I was able to delete the ISO_DOMAIN.
> >
> > But I still have problems with my storage. It is a replicated glusterfs
> > volume. It looks healthy on the nodes themselves, but somehow my
> > ovirt-engine gets confused. Can someone explain to me what the actual
> > error is?
> >
> > Note: I only replaced the ovirt-engine host and deleted the ISO_DOMAIN:
> >
> > 2014-11-11 18:32:37,832 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [71891fe3] Failed in
> > HSMGetTaskStatusVDS method
> > 2014-11-11 18:32:37,833 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended:
> > taskId = 8c5fae2c-0ddb-41cd-ac54-c404c943e00f task status = finished
> > 2014-11-11 18:32:37,834 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [71891fe3] Start SPM Task failed -
> > result: cleanSuccess, message: VDSGenericException: VDSErrorException:
> > Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist,
> > code = 358
> > 2014-11-11 18:32:37,888 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [71891fe3] spmStart polling ended,
> > spm status: Free
> > 2014-11-11 18:32:37,889 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [71891fe3] START,
> > HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId =
> > 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c,
> > taskId=8c5fae2c-0ddb-41cd-ac54-c404c943e00f), log id: 547e26fd
> > 2014-11-11 18:32:37,937 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH,
> > HSMClearTaskVDSCommand, log id: 547e26fd
> > 2014-11-11 18:32:37,938 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [71891fe3] FINISH,
> > SpmStartVDSCommand, return:
> > org.ovirt.engine.core.common.businessentities.SpmStatusResult at 5027ed97,
> > log id: 461eb5b5
> > 2014-11-11 18:32:37,941 INFO
> > [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Running command:
> > SetStoragePoolStatusCommand internal: true. Entities affected :  ID:
> > b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool
> > 2014-11-11 18:32:37,948 ERROR
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d]
> > IrsBroker::Failed::ActivateStorageDomainVDS due to:
> > IrsSpmStartFailedException: IRSGenericException: IRSErrorException:
> > SpmStart failed
> > 2014-11-11 18:32:38,006 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Irs placed on server
> > 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c failed. Proceed Failover
> > 2014-11-11 18:32:38,044 INFO
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> > (DefaultQuartzScheduler_Worker-29) START,
> > GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net,
> > HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 7a110756
> > 2014-11-11 18:32:38,045 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d]
> > hostFromVds::selectedVds - ovirt-node02.foobar.net, spmStatus Free,
> > storage pool HP_Proliant_DL180G6
> > 2014-11-11 18:32:38,048 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] starting spm on vds
> > ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1,
> > LVER -1
> > 2014-11-11 18:32:38,050 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START,
> > SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId =
> > 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId =
> > b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1,
> > storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log
> > id: 1a6ccb9c
> > 2014-11-11 18:32:38,108 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling
> > started: taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba
> > 2014-11-11 18:32:38,193 INFO
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> > (DefaultQuartzScheduler_Worker-29) FINISH,
> > GlusterVolumesListVDSCommand, return:
> > {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 9746ef53},
> > log id: 7a110756
> > 2014-11-11 18:32:38,352 INFO
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> > (DefaultQuartzScheduler_Worker-29) START,
> > GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net,
> > HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 2f25d56e
> > 2014-11-11 18:32:38,433 INFO
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> > (DefaultQuartzScheduler_Worker-29) FINISH,
> > GlusterVolumesListVDSCommand, return:
> > {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at cd3b51c4},
> > log id: 2f25d56e
> > 2014-11-11 18:32:39,117 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Failed in
> > HSMGetTaskStatusVDS method
> > 2014-11-11 18:32:39,118 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended:
> > taskId = 78d31638-70a5-46aa-89e7-1d1e8126bdba task status = finished
> > 2014-11-11 18:32:39,119 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] Start SPM Task failed -
> > result: cleanSuccess, message: VDSGenericException: VDSErrorException:
> > Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist,
> > code = 358
> > 2014-11-11 18:32:39,171 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] spmStart polling ended,
> > spm status: Free
> > 2014-11-11 18:32:39,173 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] START,
> > HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId =
> > 6948da12-0b8a-4b6d-a9af-162e6c25dad3,
> > taskId=78d31638-70a5-46aa-89e7-1d1e8126bdba), log id: 46abf4a0
> > 2014-11-11 18:32:39,220 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH,
> > HSMClearTaskVDSCommand, log id: 46abf4a0
> > 2014-11-11 18:32:39,221 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [6d5f7d9d] FINISH,
> > SpmStartVDSCommand, return:
> > org.ovirt.engine.core.common.businessentities.SpmStatusResult at 7d3782f7,
> > log id: 1a6ccb9c
> > 2014-11-11 18:32:39,224 INFO
> > [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
> > (org.ovirt.thread.pool-6-thread-39) [4777665a] Running command:
> > SetStoragePoolStatusCommand internal: true. Entities affected :  ID:
> > b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool
> > 2014-11-11 18:32:39,232 ERROR
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (org.ovirt.thread.pool-6-thread-39) [4777665a]
> > IrsBroker::Failed::ActivateStorageDomainVDS due to:
> > IrsSpmStartFailedException: IRSGenericException: IRSErrorException:
> > SpmStart failed
> > 2014-11-11 18:32:39,235 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.ActivateStorageDomainVDSCommand]
> > (org.ovirt.thread.pool-6-thread-39) [4777665a] FINISH,
> > ActivateStorageDomainVDSCommand, log id: 75877740
> > 2014-11-11 18:32:39,236 ERROR
> > [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
> > (org.ovirt.thread.pool-6-thread-39) [4777665a] Command
> > org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand throw
> > Vdc Bll exception. With error message VdcBLLException:
> > org.ovirt.engine.core.vdsbroker.irsbroker.IrsSpmStartFailedException:
> > IRSGenericException: IRSErrorException: SpmStart failed (Failed with
> > error ENGINE and code 5001)
> > 2014-11-11 18:32:39,239 INFO
> > [org.ovirt.engine.core.bll.storage.ActivateStorageDomainCommand]
> > (org.ovirt.thread.pool-6-thread-39) [4777665a] Command
> > [id=c5315de2-0817-4da2-a13e-50c8cfa93a6a]: Compensating
> > CHANGED_STATUS_ONLY of
> > org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap;
> > snapshot: EntityStatusSnapshot [id=storagePoolId =
> > b384b3da-02a6-44f3-a3f6-56751ce8c26d, storageId =
> > abc51e26-7175-4b38-b3a8-95c6928fbc2b, status=Unknown].
> > 2014-11-11 18:32:39,243 INFO
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-6-thread-39) [4777665a] Correlation ID:
> > 71891fe3, Job ID: 239d4ac0-aa7d-486a-a70f-55a9d1b910f4, Call Stack:
> > null, Custom Event ID: -1, Message: Failed to activate Storage Domain
> > RaidVolBGluster (Data Center HP_Proliant_DL180G6) by admin
> > 2014-11-11 18:32:40,566 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] Command
> > org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand
> > return value
> >
> > TaskStatusListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=654,
> > mMessage=Not SPM]]
> >
> > 2014-11-11 18:32:40,569 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] HostName =
> > ovirt-node02.foobar.net
> > 2014-11-11 18:32:40,570 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] Command
> > HSMGetAllTasksStatusesVDSCommand(HostName = ovirt-node02.foobar.net,
> > HostId = 6948da12-0b8a-4b6d-a9af-162e6c25dad3) execution failed.
> > Exception: IRSNonOperationalException: IRSGenericException:
> > IRSErrorException: IRSNonOperationalException: Not SPM
> > 2014-11-11 18:32:40,625 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] hostFromVds::selectedVds
> > - ovirt-node02.foobar.net, spmStatus Free, storage pool
> > HP_Proliant_DL180G6
> > 2014-11-11 18:32:40,628 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] starting spm on vds
> > ovirt-node02.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1,
> > LVER -1
> > 2014-11-11 18:32:40,630 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] START,
> > SpmStartVDSCommand(HostName = ovirt-node02.foobar.net, HostId =
> > 6948da12-0b8a-4b6d-a9af-162e6c25dad3, storagePoolId =
> > b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1,
> > storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log
> > id: 1f3ac280
> > 2014-11-11 18:32:40,687 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling
> > started: taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef
> > 2014-11-11 18:32:41,735 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] Failed in
> > HSMGetTaskStatusVDS method
> > 2014-11-11 18:32:41,736 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended:
> > taskId = 50ab033e-76cd-44d5-b661-a1c2b8c312ef task status = finished
> > 2014-11-11 18:32:41,737 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] Start SPM Task failed -
> > result: cleanSuccess, message: VDSGenericException: VDSErrorException:
> > Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist,
> > code = 358
> > 2014-11-11 18:32:41,790 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] spmStart polling ended,
> > spm status: Free
> > 2014-11-11 18:32:41,791 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] START,
> > HSMClearTaskVDSCommand(HostName = ovirt-node02.foobar.net, HostId =
> > 6948da12-0b8a-4b6d-a9af-162e6c25dad3,
> > taskId=50ab033e-76cd-44d5-b661-a1c2b8c312ef), log id: 852d287
> > 2014-11-11 18:32:41,839 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] FINISH,
> > HSMClearTaskVDSCommand, log id: 852d287
> > 2014-11-11 18:32:41,840 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [47871083] FINISH,
> > SpmStartVDSCommand, return:
> > org.ovirt.engine.core.common.businessentities.SpmStatusResult at 32b92b73,
> > log id: 1f3ac280
> > 2014-11-11 18:32:41,843 INFO
> > [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] Running command:
> > SetStoragePoolStatusCommand internal: true. Entities affected :  ID:
> > b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool
> > 2014-11-11 18:32:41,851 ERROR
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509]
> > IrsBroker::Failed::GetStoragePoolInfoVDS due to:
> > IrsSpmStartFailedException: IRSGenericException: IRSErrorException:
> > SpmStart failed
> > 2014-11-11 18:32:41,909 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] Irs placed on server
> > 6948da12-0b8a-4b6d-a9af-162e6c25dad3 failed. Proceed Failover
> > 2014-11-11 18:32:41,928 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] hostFromVds::selectedVds
> > - ovirt-node01.foobar.net, spmStatus Free, storage pool
> > HP_Proliant_DL180G6
> > 2014-11-11 18:32:41,930 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] starting spm on vds
> > ovirt-node01.foobar.net, storage pool HP_Proliant_DL180G6, prevId -1,
> > LVER -1
> > 2014-11-11 18:32:41,932 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] START,
> > SpmStartVDSCommand(HostName = ovirt-node01.foobar.net, HostId =
> > 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c, storagePoolId =
> > b384b3da-02a6-44f3-a3f6-56751ce8c26d, prevId=-1, prevLVER=-1,
> > storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log
> > id: 56dfcc3c
> > 2014-11-11 18:32:41,984 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling
> > started: taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492
> > 2014-11-11 18:32:42,993 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] Failed in
> > HSMGetTaskStatusVDS method
> > 2014-11-11 18:32:42,994 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended:
> > taskId = 84ac9f17-d5ec-4e43-8fcc-8ca9065a8492 task status = finished
> > 2014-11-11 18:32:42,995 ERROR
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] Start SPM Task failed -
> > result: cleanSuccess, message: VDSGenericException: VDSErrorException:
> > Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist,
> > code = 358
> > 2014-11-11 18:32:43,048 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] spmStart polling ended,
> > spm status: Free
> > 2014-11-11 18:32:43,049 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] START,
> > HSMClearTaskVDSCommand(HostName = ovirt-node01.foobar.net, HostId =
> > 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c,
> > taskId=84ac9f17-d5ec-4e43-8fcc-8ca9065a8492), log id: 5abaa4ce
> > 2014-11-11 18:32:43,098 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH,
> > HSMClearTaskVDSCommand, log id: 5abaa4ce
> > 2014-11-11 18:32:43,098 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (DefaultQuartzScheduler_Worker-28) [1ad3a509] FINISH,
> > SpmStartVDSCommand, return:
> > org.ovirt.engine.core.common.businessentities.SpmStatusResult at 7d9b9905,
> > log id: 56dfcc3c
> > 2014-11-11 18:32:43,101 INFO
> > [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
> > (DefaultQuartzScheduler_Worker-28) [725b57af] Running command:
> > SetStoragePoolStatusCommand internal: true. Entities affected :  ID:
> > b384b3da-02a6-44f3-a3f6-56751ce8c26d Type: StoragePool
> > 2014-11-11 18:32:43,108 ERROR
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (DefaultQuartzScheduler_Worker-28) [725b57af]
> > IrsBroker::Failed::GetStoragePoolInfoVDS due to:
> > IrsSpmStartFailedException: IRSGenericException: IRSErrorException:
> > SpmStart failed
> > 2014-11-11 18:32:43,444 INFO
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> > (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START,
> > GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net,
> > HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 12ae9c47
> > 2014-11-11 18:32:43,585 INFO
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> > (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH,
> > GlusterVolumesListVDSCommand, return:
> > {d46619e9-9368-4e82-bf3a-a2377b6e85e4=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at a5d949dc},
> > log id: 12ae9c47
> > 2014-11-11 18:32:43,745 INFO
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> > (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] START,
> > GlusterVolumesListVDSCommand(HostName = ovirt-node04.foobar.net,
> > HostId = 073c24e1-003f-412a-be56-0c41a435829a), log id: 4b994fd9
> > 2014-11-11 18:32:43,826 INFO
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> > (DefaultQuartzScheduler_Worker-31) [7e2ba3a3] FINISH,
> > GlusterVolumesListVDSCommand, return:
> > {660ca9ef-46fc-47b0-9b6b-61ccfd74016c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity at 10521f1b},
> > log id: 4b994fd9
> > 2014-11-11 18:32:48,838 INFO
> > [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> > (DefaultQuartzScheduler_Worker-71) START,
> > GlusterVolumesListVDSCommand(HostName = ovirt-node01.foobar.net,
> > HostId = 2e8cec66-23d7-4a5c-b6f3-9758d1d87f5c), log id: 3b036a37
> >
> >
> >
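> > By the way, are these the right things to check on the nodes? I am not
> > sure the vdsClient verbs below are the correct ones, so treat this as a
> > rough sketch:
> >
> > # does gluster itself think the volume is fine on every node?
> > gluster peer status
> > gluster volume info
> > gluster volume status
> > # does vdsm on the node still see the storage domain and pool?
> > vdsClient -s 0 getStorageDomainsList
> > vdsClient -s 0 getConnectedStoragePoolsList
> > # is the gluster domain actually mounted where vdsm expects it?
> > mount | grep glusterfs
> > ls /rhev/data-center/mnt/glusterSD/
> >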
> > Thanks,
> > Mario
> >
> > On Fri, Nov 7, 2014 at 11:49 PM, Matt . <yamakasi.014 at gmail.com> wrote:
> >> Hi,
> >>
> >> Actually it's very simple as described in the docs.
> >>
> >> Just stop the engine, make a backup, copy it over, put it back in place
> >> and start it. You can do this in several ways.
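> >>
> >> In its simplest form that is roughly the following (the new engine
> >> hostname is just a placeholder, and you may need engine-cleanup on the
> >> new host first if engine-setup already created a database there):
> >>
> >> # on the old engine
> >> engine-backup --mode=backup --file=engine.backup --log=backup.log
> >> scp engine.backup root@new-engine.example.com:
> >> # on the new engine, after installing the same packages
> >> engine-backup --mode=restore --file=engine.backup --log=restore.log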
> >>
> >> The ISO domain is the one I would remove and recreate. ISO domains
> >> are actually quite dumb domains, so not much can go wrong.
> >>
> >> I did it some time ago because I needed more performance.
> >>
> >> VDSM can run without the engine; it doesn't need it, since the engine only
> >> monitors and issues the commands, so when it's not there... the VMs just
> >> keep running (until you make them die yourself :))
> >>
> >> I would give it 15-30 min.
> >>
> >> Cheers,
> >>
> >> Matt
> >>
> >>
> >> 2014-11-07 18:36 GMT+01:00 Daniel Helgenberger
> >> <daniel.helgenberger at m-box.de>:
> >>>
> >>> Daniel Helgenberger
> >>> m box bewegtbild GmbH
> >>>
> >>> ACKERSTR. 19 P:  +49/30/2408781-22
> >>> D-10115 BERLIN F:  +49/30/2408781-10
> >>>
> >>> www.m-box.de
> >>> www.monkeymen.tv
> >>>
> >>> Geschäftsführer: Martin Retschitzegger / Michaela Göllner
> >>> Handeslregister: Amtsgericht Charlottenburg / HRB 112767
> >>> On 07.11.2014, at 15:24, Koen Vanoppen <vanoppen.koen at gmail.com> wrote:
> >>>
> >>> Hi,
> >>>
> >>> We had a consulting partner who did the same for our company. This is his
> >>> procedure, and it worked great:
> >>>
> >>> How to migrate ovirt management engine
> >>> Packages
> >>> Ensure you have the same packages & versions installed on the destination
> >>> host as on the source, using 'rpm -qa | grep ovirt'. Make sure the
> >>> versions are 100% identical.
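> >>>
> >>> A quick way to compare them from the destination host (the source hostname
> >>> is just a placeholder):
> >>>
> >>> diff <(ssh old-engine.example.com 'rpm -qa | grep ovirt | sort') <(rpm -qa | grep ovirt | sort)
> >>>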
> >>> Default setup
> >>>
> >>> Run 'engine-setup' on the destination host after installing the packages.
> >>> Use
> >>> the following configuration:
> >>> 1.    Backup existing configuration
> >>> 2.    On the source host, do:
> >>>
> >>> You might want your consultant to take a look at [1]...
> >>> Steps a-3d can basically be replaced by:
> >>> engine-backup --mode=backup --file=~/ovirt-engine-source --log=backup.log
> >>>
> >>> a.    service ovirt-engine stop
> >>> b.    service ovirt-engine-dwhd stop
> >>> c.    mkdir ~/backup
> >>> d.    tar -C /etc/pki/ovirt-engine -czpf ~/backup/ovirt-engine-pki.tar.gz
> >>> .
> >>> e.    tar -C /etc/ovirt-engine -czpf ~/backup/ovirt-engine-conf.tar.gz .
> >>> f.    cd /usr/share/ovirt-engine/dbscripts
> >>> g.    ./backup.sh
> >>> h.    mv engine_*.sql ~/backup/engine.sql
> >>> 3.    You may also want to back up dwh & reports:
> >>> a.    cd /usr/share/ovirt-engine/bin/
> >>> b.    ./engine-backup.sh --mode=backup --scope=db --db-user=engine
> >>> --db-password=XXX --file=/usr/tmp/rhevm-backups/engine-backup
> >>> --log=/tmp/engine-backup.log
> >>> c.    ./engine-backup.sh --mode=backup --scope=dwhdb --db-user=engine
> >>> --db-password=XXX --file=/usr/tmp/rhevm-backups/dwh-backup
> >>> --log=/tmp/engine-backup.log
> >>> d.    ./engine-backup.sh --mode=backup --scope=reportsdb --db-user=engine
> >>> --db-password=XXX --file=/usr/tmp/rhevm-backups/reports-backup
> >>> --log=/tmp/engine-backup.log
> >>> 4.    Download these backup files, and copy them to the destination host.
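> >>> For example (the destination hostname is only a placeholder):
> >>>
> >>> scp -r ~/backup /usr/tmp/rhevm-backups root@new-engine.example.com:
> >>>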
> >>> Restore configuration
> >>> 1.    On the destination host, do:
> >>>
> >>> Again, steps a-h are basically:
> >>> engine-setup
> >>> engine-cleanup
> >>> engine-backup --mode=restore --file=~/ovirt-engine-source --log=backup.log
> >>>
> >>> Also, I would run a second
> >>> engine-setup
> >>> After that, you should be good to go.
> >>>
> >>> Of course, depending on your previous engine setup this could be a little
> >>> more complicated. Still, it is quite straightforward.
> >>> [1] http://www.ovirt.org/Ovirt-engine-backup
> >>>
> >>> a.    service ovirt-engine stop
> >>> b.    service ovirt-engine-dwhd stop
> >>> c.    cd backup
> >>> d.    tar -C /etc/pki/ovirt-engine -xzpf ovirt-engine-pki.tar.gz
> >>> e.     tar -C /etc/ovirt-engine -xzpf ovirt-engine-conf.tar.gz
> >>> f.     tar -xvjf engine-backup
> >>> g.     tar -xvjf dwh-backup
> >>> h.     tar -xvjf reports-backup
> >>>
> >>> Restore Database
> >>> 1.    On the destination host do:
> >>> a.    su - postgres -c "psql -d template1 -c 'drop database engine;'"
> >>> b.     su - postgres -c "psql -d template1 -c 'create database engine
> >>> owner
> >>> engine;'"
> >>> c.     su - postgres
> >>> d.     psql
> >>> e.      \c engine
> >>> f.      \i /path/to/backup/engine.sql
> >>> NOTE: in case you have issues logging in to the database, add the
> >>> following
> >>>       line to the pg_hba.conf file:
> >>>
> >>>        host    all    engine    127.0.0.1/32        trust
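> >>>
> >>> The interactive psql steps c-f can also be done in one go, non-interactively:
> >>>
> >>> su - postgres -c "psql -d engine -f /path/to/backup/engine.sql"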
> >>>
> >>> 2.    Fix engine password:
> >>> a.    su - postgres
> >>> b.     psql
> >>> c.    alter user engine with password 'XXXXXXX';
> >>> Change ovirt hostname
> >>> On the destination host, run:
> >>>
> >>>  /usr/share/ovirt-engine/setup/bin/ovirt-engine-rename
> >>>
> >>> NB:
> >>> Restoring the dwh/reports database is similar to steps 5-7, but omitted
> >>> from
> >>> this document due to problems starting the reporting service.
> >>>
> >>>
> >>> 2014-11-07 10:28 GMT+01:00 Sven Kieske <s.kieske at mittwald.de>:
> >>>>
> >>>>
> >>>>
> >>>> On 07/11/14 10:10, Ml Ml wrote:
> >>>> > anyone? :)
> >>>> >
> >>>> > Or are you only doing backups, no restore? :-P
> >>>>
> >>>> Gladly, I only had to test disaster recovery so far and have not
> >>>> actually performed it (yet) :D
> >>>>
> >>>> To be honest: I have never restored ovirt-engine with running vdsm
> >>>> hosts connected to it. It sounds like a lot of fun; I'll see if I can
> >>>> grab some time and try this out myself :)
> >>>>
> >>>> From your description I guess you have the NFS/ISO domain on your engine
> >>>> host? Why don't you just separate it, so there is no need for remounts
> >>>> if your engine is destroyed.
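> >>>>
> >>>> Setting up a separate NFS export for the ISO domain is just something
> >>>> like this on whichever box should hold it (path and export options are
> >>>> only an example; the directory must be owned by vdsm:kvm, uid/gid 36:36):
> >>>>
> >>>> mkdir -p /srv/ovirt/iso
> >>>> chown 36:36 /srv/ovirt/iso
> >>>> echo '/srv/ovirt/iso *(rw,sync,no_subtree_check,anonuid=36,anongid=36,all_squash)' >> /etc/exports
> >>>> exportfs -ra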
> >>>>
> >>>> HTH
> >>>>
> >>>> --
> >>>> Mit freundlichen Grüßen / Regards
> >>>>
> >>>> Sven Kieske
> >>>>
> >>>> Systemadministrator
> >>>> Mittwald CM Service GmbH & Co. KG
> >>>> Königsberger Straße 6
> >>>> 32339 Espelkamp
> >>>> T: +49-5772-293-100
> >>>> F: +49-5772-293-333
> >>>> https://www.mittwald.de
> >>>> Geschäftsführer: Robert Meyer
> >>>> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad
> >>>> Oeynhausen
> >>>> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad
> >>>> Oeynhausen
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


