Can the SPM run on the ovirt-engine host?

Hello,

When I create a virtual machine from a template (CentOS 6.5, 2 cores, 8GB mem, 500GB hd), this process takes almost 2 hours. I click the "New VM" button, just select the template, and click OK.

engine.log shows me high network consumption (98%) between the engine-server host and the SPM host.

I tried to make my engine-server host an SPM host too, but without success.

Can the SPM run on the same ovirt-engine machine?

Am I doing something wrong? Or is creating a VM from a template really this slow?

My servers:
srv-0202 = ovirt-engine, vdsm
srv-0203 = spm, vdsm
srv-0204 = vdsm
These servers are Dell blades connected on a 100GB switch.

Thanks.

This is what I know about SPM: http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011

= Storage Pool Manager (SPM) A role assigned to one host in a data center, granting it sole authority over:
- Creation, deletion, and manipulation of virtual disk images, snapshots and templates
- Templates: you can create one VM as a golden image and provision it to multiple VMs (QCOW layers)
- Allocation of storage for sparse block devices (on SAN)
- Thin provisioning (see below)
- Single metadata writer:
  - SPM lease mechanism (Chockler and Malkhi 2004, Light-Weight Leases for Storage-Centric Coordination)
  - Storage-centric mailbox
- This role can be migrated to any host in the data center
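An aside on the SPM notes above: which vdsm host currently holds the SPM role can also be checked against the engine's REST API. The snippet below is only a minimal sketch, assuming an oVirt 3.x engine answering at https://srv-0202/api, admin credentials, and the python-requests package; the exact element that reports SPM status (e.g. storage_manager) differs between API versions.

import requests  # assumption: python-requests is installed

API = "https://srv-0202/api"             # assumption: 3.x REST API base URL
AUTH = ("admin@internal", "password")    # assumption: admin credentials

# List all hosts and inspect each <host> element for its SPM status field.
resp = requests.get(API + "/hosts", auth=AUTH, verify=False,
                    headers={"Accept": "application/xml"})
resp.raise_for_status()
print(resp.text)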

Hi Tamer,

Are you familiar with the all-in-one feature? http://www.ovirt.org/Feature/AllInOne

I'm not sure if this can help you now, as you probably don't want to re-install oVirt, right?

----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: users@ovirt.org Sent: Monday, April 14, 2014 5:13:12 PM Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
Hello,
When I create a virtual machine from a template (CentOS 6.5, 2 cores, 8GB mem, 500GB hd), this process takes almost 2 hours. I click the "New VM" button, just select the template, and click OK.
engine.log shows me high network consumption (98%) between the engine-server host and the SPM host.
I tried to make my engine-server host an SPM host too, but without success.
Can the SPM run on the same ovirt-engine machine?
Am I doing something wrong? Or is creating a VM from a template really this slow?
My servers:
srv-0202 = ovirt-engine, vdsm
srv-0203 = spm, vdsm
srv-0204 = vdsm
These servers are Dell blades connected on a 100GB switch.
thanks
This is what I know about SPM: http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011
= Storage Pool Manager (SPM) A role assigned to one host in a data center granting it sole authority over:
- Creation, deletion, and manipulation of virtual disk images, snapshots and templates
- Templates: you can create one VM as a golden image and provision it to multiple VMs (QCOW layers)
- Allocation of storage for sparse block devices (on SAN)
- Thin provisioning (see below)
- Single metadata writer:
  - SPM lease mechanism (Chockler and Malkhi 2004, Light-Weight Leases for Storage-Centric Coordination)
  - Storage-centric mailbox
- This role can be migrated to any host in the data center

Hi Yair,

Yes, I don't want to re-install oVirt. I'm not sure if all-in-one could fix this problem; all-in-one installs vdsm on the same ovirt-engine host, and I already have this: ovirt-engine and vdsm on the same host, srv-0202. My storage domains (data and iso) are hosted on srv-0202.

I believe my solution is to create one engine per server: three independent engines managing only local virtual machines.

On Mon, Apr 14, 2014 at 10:07 PM, Yair Zaslavsky <yzaslavs@redhat.com> wrote:
Hi Tamer, Are you familiar with the all in one feature?
http://www.ovirt.org/Feature/AllInOne
I'm not sure if this can help you now, as you probably don't want to re-install ovirt, right?
----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: users@ovirt.org Sent: Monday, April 14, 2014 5:13:12 PM Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
Hello,
When I create a virtual machine from a template (CentOS 6.5, 2 cores, 8GB mem, 500GB hd), this process takes almost 2 hours. I click the "New VM" button, just select the template, and click OK.
engine.log shows me high network consumption (98%) between the engine-server host and the SPM host.
I tried to make my engine-server host an SPM host too, but without success.
Can the SPM run on the same ovirt-engine machine?
Am I doing something wrong? Or is creating a VM from a template really this slow?
My servers:
srv-0202 = ovirt-engine, vdsm
srv-0203 = spm, vdsm
srv-0204 = vdsm
These servers are Dell blades connected on a 100GB switch.
thanks
This is what I know about SPM: http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011
= Storage Pool Manager (SPM) A role assigned to one host in a data center granting it sole authority over:
- Creation, deletion, and manipulation of virtual disk images, snapshots and templates
- Templates: you can create one VM as a golden image and provision it to multiple VMs (QCOW layers)
- Allocation of storage for sparse block devices (on SAN)
- Thin provisioning (see below)
- Single metadata writer:
  - SPM lease mechanism (Chockler and Malkhi 2004, Light-Weight Leases for Storage-Centric Coordination)
  - Storage-centric mailbox
- This role can be migrated to any host in the data center

On 04/23/2014 09:17 PM, Tamer Lima wrote:
Hi Yair,
Yes, I don't want to re-install oVirt.
I'm not sure if all-in-one could fix this problem.
All-in-one installs vdsm on the same ovirt-engine host. Well, I already have this: ovirt-engine and vdsm on the same host, srv-0202.
My storage domains (data and iso) are hosted on srv-0202.
I believe my solution is to create one engine per server: three independent engines managing only local virtual machines.
This does not sound right; the engine and the SPM should not communicate at 98% traffic for 2 hours. The SPM should be one of the nodes in the DC, and the engine isn't acting as a node (even if it happens to be deployed on one via all-in-one or hosted engine). Are you creating the VMs from the template thinly provisioned or as a clone?
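To make the thin vs. clone question concrete, here is a rough sketch of what the two options amount to at the image level. It assumes a file-based (NFS) data domain, made-up paths, and the qemu-img tool, which is roughly what VDSM drives for these copies; it is an illustration, not the actual VDSM code path.

import subprocess

template = "/rhev/data-center/mnt/example/images/tmpl/base.raw"    # hypothetical path
thin_disk = "/rhev/data-center/mnt/example/images/vm/disk.qcow2"   # hypothetical path
clone_disk = "/rhev/data-center/mnt/example/images/vm/disk.raw"    # hypothetical path

# Thin provisioning: a qcow2 overlay that references the template as a backing
# file. Only metadata is written, so it is nearly instant and moves no bulk data.
subprocess.check_call(["qemu-img", "create", "-f", "qcow2",
                       "-b", template, "-F", "raw", thin_disk])

# Clone: a full, independent copy of the template image. For a 500GB template
# this pushes the whole image through the storage network.
subprocess.check_call(["qemu-img", "convert", "-O", "raw", template, clone_disk])

If the VM is being created as a clone, copying the full 500GB image would by itself explain a multi-hour creation time.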
On Mon, Apr 14, 2014 at 10:07 PM, Yair Zaslavsky <yzaslavs@redhat.com> wrote:
Hi Tamer, Are you familiar with the all in one feature?
http://www.ovirt.org/Feature/AllInOne
I'm not sure if this can help you now, as you probably don't want to re-install ovirt, right?
----- Original Message -----
> From: "Tamer Lima" <tamer.americo@gmail.com>
> To: users@ovirt.org
> Sent: Monday, April 14, 2014 5:13:12 PM
> Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
>
> Hello,
>
> When I create a virtual machine from a template (CentOS 6.5, 2 cores, 8GB mem, 500GB hd), this process takes almost 2 hours. I click the "New VM" button, just select the template, and click OK.
>
> engine.log shows me high network consumption (98%) between the engine-server host and the SPM host.
>
> I tried to make my engine-server host an SPM host too, but without success.
>
> Can the SPM run on the same ovirt-engine machine?
>
> Am I doing something wrong? Or is creating a VM from a template really this slow?
>
> My servers:
> srv-0202 = ovirt-engine, vdsm
> srv-0203 = spm, vdsm
> srv-0204 = vdsm
> These servers are Dell blades connected on a 100GB switch.
>
> Thanks.
>
> This is what I know about SPM:
> http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011
>
> = Storage Pool Manager (SPM) A role assigned to one host in a data center, granting it sole authority over:
>
> - Creation, deletion, and manipulation of virtual disk images, snapshots and templates
> - Templates: you can create one VM as a golden image and provision it to multiple VMs (QCOW layers)
> - Allocation of storage for sparse block devices (on SAN)
> - Thin provisioning (see below)
> - Single metadata writer:
>   - SPM lease mechanism (Chockler and Malkhi 2004, Light-Weight Leases for Storage-Centric Coordination)
>   - Storage-centric mailbox
> - This role can be migrated to any host in the data center

----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: users@ovirt.org Sent: Monday, April 14, 2014 5:13:12 PM Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
Hello,
When I create a virtual machine from a template (CentOS 6.5, 2 cores, 8GB mem, 500GB hd), this process takes almost 2 hours. I click the "New VM" button, just select the template, and click OK.
engine.log shows me high network consumption (98%) between the engine-server host and the SPM host.
Could you share the piece of the log which indicates that the 98% consumption is between the engine server and the SPM host (vs. the SPM node and the storage server)?
I tried to make my engine-server host an SPM host too, but without success.
Can the SPM run on the same ovirt-engine machine?
Am I doing something wrong? Or is creating a VM from a template really this slow?
My servers:
srv-0202 = ovirt-engine, vdsm
srv-0203 = spm, vdsm
srv-0204 = vdsm
These servers are Dell blades connected on a 100GB switch.
thanks
This is what I know about SPM: http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011
= Storage Pool Manager (SPM) A role assigned to one host in a data center granting it sole authority over:
* Creation, deletion, and manipulation of virtual disk images, snapshots and templates
* Templates: you can create one VM as a golden image and provision it to multiple VMs (QCOW layers)
* Allocation of storage for sparse block devices (on SAN)
* Thin provisioning (see below)
* Single metadata writer:
  * SPM lease mechanism (Chockler and Malkhi 2004, Light-Weight Leases for Storage-Centric Coordination)
  * Storage-centric mailbox
* This role can be migrated to any host in the data center

Hi, this is the relevant piece of the engine.log at serv-0202 (engine server); the SPM was defined on serv-0203.

Log from serv-0202 (engine server):

2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].*

Below is the log from before the VM creation procedure. The log starts at the moment I press the button to create a new virtual machine:

(The VM creation procedure takes more than 1 hour. I ran the tcpdump command on srv-0203 (SPM); even when creating with thin provisioning, I collected 500 GB of traffic between serv-0202 and serv-0203. When the VM is finally created there is no real disk allocation from oVirt, only my tcpdump log file. I do not know why this traffic exists.) (A byte-counter sketch for watching this traffic follows at the end of this message.)

Log from serv-0202 (engine server):

2014-04-24 13:11:36,241 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [1a138258] Correlation ID: 1a138258, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 2014-04-24 13:11:36,255 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] hostFromVds::selectedVds - srv-0202, spmStatus Free, storage pool Default 2014-04-24 13:11:36,258 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] starting spm on vds srv-0202, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:36,259 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, SpmStartVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 778a334c 2014-04-24 13:11:36,310 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling started: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 2014-04-24 13:11:37,315 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Failed in HSMGetTaskStatusVDS method 2014-04-24 13:11:37,316 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 task status = finished 2014-04-24 13:11:37,316 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-04-24 13:11:37,363 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended, spm status: Free 2014-04-24 13:11:37,364 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] 
(DefaultQuartzScheduler_Worker-20) [1a138258] START, HSMClearTaskVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, taskId=198c7765-38cb-42e7-9349-93ca43be7066), log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, HSMClearTaskVDSCommand, log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@dfe925d, log id: 778a334c 2014-04-24 13:11:37,411 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool 2014-04-24 13:11:37,416 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: 443b1ed8, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 2014-04-24 13:11:37,418 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-04-24 13:11:37,466 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Irs placed on server fbdf0655-6560-4e12-a95a-875592f62cb5 failed. Proceed Failover 2014-04-24 13:11:37,528 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] hostFromVds::selectedVds - srv-0203, spmStatus Free, storage pool Default 2014-04-24 13:11:37,530 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] starting spm on vds srv-0203, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:37,531 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, SpmStartVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 77e0918 2014-04-24 13:11:37,589 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling started: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 2014-04-24 13:11:38,595 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 task status = finished 2014-04-24 13:11:38,652 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended, spm status: SPM 2014-04-24 13:11:38,653 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, HSMClearTaskVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, taskId=81164899-b8b5-4ea5-9c82-94b66a3df741), log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] 
(DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, HSMClearTaskVDSCommand, log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@67238f8a, log id: 77e0918 2014-04-24 13:11:38,699 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Initialize Irs proxy from vds: srv-0203.lttd.br 2014-04-24 13:11:38,703 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host srv-0203 (Address: srv-0203.lttd.br). 2014-04-24 13:11:38,703 WARN [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-48) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue. 2014-04-24 13:11:38,711 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false), log id: 710a52c9 2014-04-24 13:11:38,735 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] -- executeIrsBrokerCommand: Attempting on storage pool 5849b030-626e-47cb-ad90-3ce782d831b3 2014-04-24 13:11:38,736 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, HSMGetAllTasksInfoVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f), log id: 14a15273 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 14a15273 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 710a52c9 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] Discovered no tasks on Storage Pool Default 2014-04-24 13:14:52,094 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:14:52,097 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational 2014-04-24 13:14:54,923 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-37) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. 
vds: srv-0202 2014-04-24 13:17:59,281 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Lock Acquired to object EngineLock [exclusiveLocks= key: *servidor-teste* value: VM_NAME , sharedLocks= key: 1f08d35a-adf0-4734-9ce6-1431406096ba value: TEMPLATE key: c8e52f2a-5384-41ee-af77-7ee37bf54355 value: DISK ] 2014-04-24 13:17:59,302 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Running command: AddVmFromTemplateCommand internal: false. Entities affected : ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups, ID: 1f08d35a-adf0-4734-9ce6-1431406096ba Type: VmTemplate, ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage, ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups 2014-04-24 13:17:59,336 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] START, SetVmStatusVDSCommand( vmId = 8a94d957-621e-4cd6-b94d-64a0572cb759, status = ImageLocked), log id: 6ada3a4a 2014-04-24 13:17:59,339 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] FINISH, SetVmStatusVDSCommand, log id: 6ada3a4a 2014-04-24 13:17:59,344 INFO [org.ovirt.engine.core.bll.CreateCloneOfTemplateCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] Running command: CreateCloneOfTemplateCommand internal: true. Entities affected : ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage 2014-04-24 13:17:59,371 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] START, CopyImageVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, imageGroupId = c8e52f2a-5384-41ee-af77-7ee37bf54355, imageId = 5c642d47-4f03-4a81-8a10-067b98e068f4, dstImageGroupId = 5a09cae5-c7a1-466d-9b69-ff8ad739d71c, vmId = 1f08d35a-adf0-4734-9ce6-1431406096ba, dstImageId = 2d82ce92-96f1-482c-b8fe-c21d9dfb23e6, imageDescription = , dstStorageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, copyVolumeType = LeafVol, volumeFormat = RAW, preallocate = Sparse, postZero = false, force = false), log id: 4a480fe7 2014-04-24 13:17:59,372 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- executeIrsBrokerCommand: calling 'copyImage' with two new parameters: description and UUID 2014-04-24 13:17:59,373 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- copyImage parameters: sdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 vmGUID=1f08d35a-adf0-4734-9ce6-1431406096ba srcImageGUID=c8e52f2a-5384-41ee-af77-7ee37bf54355 srcVolUUID=5c642d47-4f03-4a81-8a10-067b98e068f4 dstImageGUID=5a09cae5-c7a1-466d-9b69-ff8ad739d71c dstVolUUID=2d82ce92-96f1-482c-b8fe-c21d9dfb23e6 descr= dstSdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 2014-04-24 13:17:59,442 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] FINISH, CopyImageVDSCommand, return: 00000000-0000-0000-0000-000000000000, log id: 4a480fe7 2014-04-24 13:17:59,446 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 755c7619-60e6-4899-b772-17c56cdec057 2014-04-24 13:17:59,447 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandMultiAsyncTasks::AttachTask: Attaching task 
e8726bad-05ff-4f89-a127-146a3f8bceb2 to command 755c7619-60e6-4899-b772-17c56cdec057. 2014-04-24 13:17:59,451 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-9) [48e79aaf] Adding task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet.. 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) [48e79aaf] Correlation ID: *7fb59186*, Job ID: aeb08ac5-d157-40ae-bcd5-ec68d9cc5ae8, Call Stack: null, Custom Event ID: -1, Message: VM* servidor-teste creation was initiated by admin.* 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] BaseAsyncTask::startPollingTask: Starting to poll task e8726bad-05ff-4f89-a127-146a3f8bceb2. 2014-04-24 13:17:59,560 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now 2014-04-24 13:17:59,566 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-99) SPMAsyncTask::PollTask: Polling task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status running. 2014-04-24 13:17:59,567 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Finished polling Tasks, will poll again in 10 seconds. 2014-04-24 13:17:59,653 INFO [org.ovirt.engine.core.bll.network.vm.ReorderVmNicsCommand] (ajp--127.0.0.1-8702-5) [601e9dcb] Running command: ReorderVmNicsCommand internal: false. Entities affected : ID: 8a94d957-621e-4cd6-b94d-64a0572cb759 Type: VM 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].* 2014-04-24 13:19:54,926 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:19:54,929 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_*DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational* 2014-04-24 13:19:57,802 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-36) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202 ^C

"Men are not prisoners of fate, but only prisoners of their own minds" - Franklin Roosevelt
______________________________
Tamer Américo
(61) 8411-3491
Master in Electrical Engineering
Computer Scientist

On Thu, Apr 24, 2014 at 3:27 AM, Moti Asayag <masayag@redhat.com> wrote:
----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: users@ovirt.org Sent: Monday, April 14, 2014 5:13:12 PM Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
Hello,
When I create a virtual machine from a template (CentOS 6.5, 2 cores, 8GB mem, 500GB hd), this process takes almost 2 hours. I click the "New VM" button, just select the template, and click OK.
engine.log shows me high network consumption (98%) between the engine-server host and the SPM host.
Could you share the piece of the log which indicates that the 98% consumption is between the engine server and the SPM host (vs. the SPM node and the storage server)?
I tried to make my engine-server host an SPM host too, but without success.
Can the SPM run on the same ovirt-engine machine?
Am I doing something wrong? Or is creating a VM from a template really this slow?
My servers:
srv-0202 = ovirt-engine, vdsm
srv-0203 = spm, vdsm
srv-0204 = vdsm
These servers are Dell blades connected on a 100GB switch.
thanks
This is what I know about SPM: http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011
= Storage Pool Manager (SPM) A role assigned to one host in a data center granting it sole authority over:
* Creation, deletion, and manipulation of virtual disk images, snapshots and templates
* Templates: you can create one VM as a golden image and provision it to multiple VMs (QCOW layers)
* Allocation of storage for sparse block devices (on SAN)
* Thin provisioning (see below)
* Single metadata writer:
  * SPM lease mechanism (Chockler and Malkhi 2004, Light-Weight Leases for Storage-Centric Coordination)
  * Storage-centric mailbox
* This role can be migrated to any host in the data center
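An aside on the 500 GB tcpdump capture mentioned in this message: instead of capturing packets, the interface byte counters can be sampled while the copy runs, which shows the same rx/tx behaviour without writing a huge capture file. A minimal sketch, assuming a Linux host and that the management bridge is named ovirtmgmt:

import time

def read_counters(iface):
    # /proc/net/dev: after "iface:", field 0 is rx bytes and field 8 is tx bytes.
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError("interface %s not found" % iface)

IFACE = "ovirtmgmt"    # assumption: default oVirt management bridge name
INTERVAL = 10          # seconds between samples

prev = read_counters(IFACE)
for _ in range(30):    # sample for about 5 minutes; adjust as needed
    time.sleep(INTERVAL)
    cur = read_counters(IFACE)
    print("rx %.1f MB/s  tx %.1f MB/s"
          % ((cur[0] - prev[0]) / float(INTERVAL) / 1e6,
             (cur[1] - prev[1]) / float(INTERVAL) / 1e6))
    prev = cur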

I created a link with an image showing the network consumption between the engine and the SPM:

http://pt-br.tinypic.com/r/dzi80i/8
http://tinypic.com/view.php?pic=dzi80i&s=8#.U1lEKfldVyN

Does this forum have a preferred image site/blog? Thanks.

Below is the log of spm-lock.log:

[root@srv-0203 vdsm]# tail -f spm-lock.log
[2014-03-06 18:21:21] Protecting spm lock for vdsm pid 2992
[2014-03-06 18:21:21] Trying to acquire lease - sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d lease_file=/rhev/data-center/mnt/srv-0202.lttd.br:_var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases id=1 lease_time_ms=5000 io_op_to_ms=1000
[2014-03-06 18:21:34] Lease acquired sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.ltd.br:_var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases, TS=1394140892739675
[2014-03-06 18:21:34] *Protecting spm lock for vdsm *pid 2992
[2014-03-06 18:21:34] Started renewal process (pid=17519) for sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.unb.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases
[2014-03-06 18:21:34] Stopping lease for pool: 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d pgrps: -17519 User defined signal 1
[2014-03-06 18:21:34] releasing lease sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases

On Thu, Apr 24, 2014 at 1:51 PM, Tamer Lima <tamer.americo@gmail.com> wrote:
Hi, this is the relevant piece of the engine.log at serv-0202 (engine server); the SPM was defined on serv-0203.
log from serv-0202 (engine server): 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].*
Below is the log from before the VM creation procedure. The log starts at the moment I press the button to create a new virtual machine:
(The VM creation procedure takes more than 1 hour. I ran the tcpdump command on srv-0203 (SPM); even when creating with thin provisioning, I collected 500 GB of traffic between serv-0202 and serv-0203. When the VM is finally created there is no real disk allocation from oVirt, only my tcpdump log file. I do not know why this traffic exists.)
log from serv-0202 (engine server):
2014-04-24 13:11:36,241 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [1a138258] Correlation ID: 1a138258, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 2014-04-24 13:11:36,255 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] hostFromVds::selectedVds - srv-0202, spmStatus Free, storage pool Default 2014-04-24 13:11:36,258 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] starting spm on vds srv-0202, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:36,259 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, SpmStartVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 778a334c 2014-04-24 13:11:36,310 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling started: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 2014-04-24 13:11:37,315 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Failed in HSMGetTaskStatusVDS method 2014-04-24 13:11:37,316 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 task status = finished 2014-04-24 13:11:37,316 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-04-24 13:11:37,363 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended, spm status: Free 2014-04-24 13:11:37,364 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, HSMClearTaskVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, taskId=198c7765-38cb-42e7-9349-93ca43be7066), log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, HSMClearTaskVDSCommand, log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@dfe925d, log id: 778a334c 2014-04-24 13:11:37,411 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool 2014-04-24 13:11:37,416 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: 443b1ed8, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 
2014-04-24 13:11:37,418 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-04-24 13:11:37,466 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Irs placed on server fbdf0655-6560-4e12-a95a-875592f62cb5 failed. Proceed Failover 2014-04-24 13:11:37,528 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] hostFromVds::selectedVds - srv-0203, spmStatus Free, storage pool Default 2014-04-24 13:11:37,530 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] starting spm on vds srv-0203, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:37,531 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, SpmStartVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 77e0918 2014-04-24 13:11:37,589 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling started: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 2014-04-24 13:11:38,595 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 task status = finished 2014-04-24 13:11:38,652 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended, spm status: SPM 2014-04-24 13:11:38,653 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, HSMClearTaskVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, taskId=81164899-b8b5-4ea5-9c82-94b66a3df741), log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, HSMClearTaskVDSCommand, log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@67238f8a, log id: 77e0918 2014-04-24 13:11:38,699 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Initialize Irs proxy from vds: srv-0203.lttd.br 2014-04-24 13:11:38,703 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host srv-0203 (Address: srv-0203.lttd.br). 2014-04-24 13:11:38,703 WARN [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-48) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue. 
2014-04-24 13:11:38,711 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false), log id: 710a52c9 2014-04-24 13:11:38,735 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] -- executeIrsBrokerCommand: Attempting on storage pool 5849b030-626e-47cb-ad90-3ce782d831b3 2014-04-24 13:11:38,736 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, HSMGetAllTasksInfoVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f), log id: 14a15273 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 14a15273 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 710a52c9 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] Discovered no tasks on Storage Pool Default 2014-04-24 13:14:52,094 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:14:52,097 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational 2014-04-24 13:14:54,923 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-37) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202 2014-04-24 13:17:59,281 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Lock Acquired to object EngineLock [exclusiveLocks= key: *servidor-teste* value: VM_NAME , sharedLocks= key: 1f08d35a-adf0-4734-9ce6-1431406096ba value: TEMPLATE key: c8e52f2a-5384-41ee-af77-7ee37bf54355 value: DISK ] 2014-04-24 13:17:59,302 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Running command: AddVmFromTemplateCommand internal: false. Entities affected : ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups, ID: 1f08d35a-adf0-4734-9ce6-1431406096ba Type: VmTemplate, ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage, ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups 2014-04-24 13:17:59,336 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] START, SetVmStatusVDSCommand( vmId = 8a94d957-621e-4cd6-b94d-64a0572cb759, status = ImageLocked), log id: 6ada3a4a 2014-04-24 13:17:59,339 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] FINISH, SetVmStatusVDSCommand, log id: 6ada3a4a 2014-04-24 13:17:59,344 INFO [org.ovirt.engine.core.bll.CreateCloneOfTemplateCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] Running command: CreateCloneOfTemplateCommand internal: true. 
Entities affected : ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage 2014-04-24 13:17:59,371 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] START, CopyImageVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, imageGroupId = c8e52f2a-5384-41ee-af77-7ee37bf54355, imageId = 5c642d47-4f03-4a81-8a10-067b98e068f4, dstImageGroupId = 5a09cae5-c7a1-466d-9b69-ff8ad739d71c, vmId = 1f08d35a-adf0-4734-9ce6-1431406096ba, dstImageId = 2d82ce92-96f1-482c-b8fe-c21d9dfb23e6, imageDescription = , dstStorageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, copyVolumeType = LeafVol, volumeFormat = RAW, preallocate = Sparse, postZero = false, force = false), log id: 4a480fe7 2014-04-24 13:17:59,372 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- executeIrsBrokerCommand: calling 'copyImage' with two new parameters: description and UUID 2014-04-24 13:17:59,373 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- copyImage parameters: sdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 vmGUID=1f08d35a-adf0-4734-9ce6-1431406096ba srcImageGUID=c8e52f2a-5384-41ee-af77-7ee37bf54355 srcVolUUID=5c642d47-4f03-4a81-8a10-067b98e068f4 dstImageGUID=5a09cae5-c7a1-466d-9b69-ff8ad739d71c dstVolUUID=2d82ce92-96f1-482c-b8fe-c21d9dfb23e6 descr= dstSdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 2014-04-24 13:17:59,442 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] FINISH, CopyImageVDSCommand, return: 00000000-0000-0000-0000-000000000000, log id: 4a480fe7 2014-04-24 13:17:59,446 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 755c7619-60e6-4899-b772-17c56cdec057 2014-04-24 13:17:59,447 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandMultiAsyncTasks::AttachTask: Attaching task e8726bad-05ff-4f89-a127-146a3f8bceb2 to command 755c7619-60e6-4899-b772-17c56cdec057. 2014-04-24 13:17:59,451 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-9) [48e79aaf] Adding task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet.. 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) [48e79aaf] Correlation ID: *7fb59186*, Job ID: aeb08ac5-d157-40ae-bcd5-ec68d9cc5ae8, Call Stack: null, Custom Event ID: -1, Message: VM* servidor-teste creation was initiated by admin.* 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] BaseAsyncTask::startPollingTask: Starting to poll task e8726bad-05ff-4f89-a127-146a3f8bceb2. 
2014-04-24 13:17:59,560 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now 2014-04-24 13:17:59,566 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-99) SPMAsyncTask::PollTask: Polling task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status running. 2014-04-24 13:17:59,567 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Finished polling Tasks, will poll again in 10 seconds. 2014-04-24 13:17:59,653 INFO [org.ovirt.engine.core.bll.network.vm.ReorderVmNicsCommand] (ajp--127.0.0.1-8702-5) [601e9dcb] Running command: ReorderVmNicsCommand internal: false. Entities affected : ID: 8a94d957-621e-4cd6-b94d-64a0572cb759 Type: VM 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].* 2014-04-24 13:19:54,926 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:19:54,929 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_*DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational* 2014-04-24 13:19:57,802 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-36) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202 ^C
On Thu, Apr 24, 2014 at 3:27 AM, Moti Asayag <masayag@redhat.com> wrote:
----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: users@ovirt.org Sent: Monday, April 14, 2014 5:13:12 PM Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
Hello,
When I create a virtual machine from a template (CentOS 6.5, 2 cores, 8GB mem, 500GB hd), this process takes almost 2 hours. I click the "New VM" button, just select the template, and click OK.
engine.log shows me high network consumption (98%) between the engine-server host and the SPM host.
Could you share the piece of the log which indicates that the 98% consumption is between the engine server and the SPM host (vs. the SPM node and the storage server)?
I tried to make my engine-server host an SPM host too, but without success.
Can the SPM run on the same ovirt-engine machine?
Am I doing something wrong? Or is creating a VM from a template really this slow?
My servers:
srv-0202 = ovirt-engine, vdsm
srv-0203 = spm, vdsm
srv-0204 = vdsm
These servers are Dell blades connected on a 100GB switch.
thanks
This is what I know about SPM: http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011
= Storage Pool Manager (SPM) A role assigned to one host in a data center, granting it sole authority over:
* Creation, deletion, and manipulation of virtual disk images, snapshots and templates
* Templates: you can create one VM as a golden image and provision it to multiple VMs (QCOW layers)
* Allocation of storage for sparse block devices (on SAN)
* Thin provisioning (see below)
* Single metadata writer:
  * SPM lease mechanism (Chockler and Malkhi 2004, Light-Weight Leases for Storage-Centric Coordination)
  * Storage-centric mailbox
* This role can be migrated to any host in the data center
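A related check for the "no real disk allocation" observation quoted above: on an NFS data domain the newly created volume can be inspected directly, comparing its apparent size with the space actually allocated on the filesystem. A small sketch with a hypothetical volume path; a truly sparse RAW volume should show close to zero allocated bytes right after creation.

import os
import subprocess
import sys

path = sys.argv[1]   # hypothetical, e.g. /rhev/data-center/mnt/<domain>/<sd_uuid>/images/<img_uuid>/<vol_uuid>

st = os.stat(path)
apparent = st.st_size              # size reported by ls -l
allocated = st.st_blocks * 512     # bytes actually allocated on the filesystem
print("apparent: %d bytes, allocated: %d bytes" % (apparent, allocated))

# qemu-img also reports the virtual size vs. the disk size of the image.
subprocess.call(["qemu-img", "info", path])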

One more piece of information from engine.log; it is another thing that confused me about the strange traffic during VM creation: the virtual machine was created using volumeFormat = RAW, preallocate = Sparse. According to Table 4.1, chapter 4 of the Red Hat Enterprise admin guide documentation, this combination gives a file whose initial size is close to zero and has no formatting (NFS, format RAW and type Sparse).

2014-04-24 13:17:59,371 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] START, *CopyImageVDSCommand( * storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, imageGroupId = c8e52f2a-5384-41ee-af77-7ee37bf54355, imageId = 5c642d47-4f03-4a81-8a10-067b98e068f4, dstImageGroupId = 5a09cae5-c7a1-466d-9b69-ff8ad739d71c, vmId = 1f08d35a-adf0-4734-9ce6-1431406096ba, dstImageId = 2d82ce92-96f1-482c-b8fe-c21d9dfb23e6, imageDescription = , dstStorageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, copyVolumeType = LeafVol, *volumeFormat = RAW, preallocate = Sparse, *postZero = false, force = false), log id: 4a480fe7

On Thu, Apr 24, 2014 at 2:04 PM, Tamer Lima <tamer.americo@gmail.com> wrote:
I created a link with an image showing the network consumption between the engine and the SPM.
http://pt-br.tinypic.com/r/dzi80i/8 http://tinypic.com/view.php?pic=dzi80i&s=8#.U1lEKfldVyN
Does this forum have a preferred image site/blog? Thanks.
below is the log of spm-lock.log
[root@srv-0203 vdsm]# tail -f spm-lock.log [2014-03-06 18:21:21] Protecting spm lock for vdsm pid 2992 [2014-03-06 18:21:21] Trying to acquire lease - sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d lease_file=/rhev/data-center/mnt/srv-0202.lttd.br:_var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases id=1 lease_time_ms=5000 io_op_to_ms=1000 [2014-03-06 18:21:34] Lease acquired sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.ltd.br:_var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases, TS=1394140892739675 [2014-03-06 18:21:34] *Protecting spm lock for vdsm *pid 2992 [2014-03-06 18:21:34] Started renewal process (pid=17519) for sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.unb.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases [2014-03-06 18:21:34] Stopping lease for pool: 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d pgrps: -17519 User defined signal 1 [2014-03-06 18:21:34] releasing lease sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases
On Thu, Apr 24, 2014 at 1:51 PM, Tamer Lima <tamer.americo@gmail.com> wrote:
Hi, this is the relevant piece of the engine.log at serv-0202 (engine server); the SPM was defined on serv-0203.
log from serv-0202 (engine server): 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].*
Below is the log from before the VM creation procedure. The log starts at the moment I press the button to create a new virtual machine:
(The VM creation procedure takes more than 1 hour. I ran the tcpdump command on srv-0203 (SPM); even when creating with thin provisioning, I collected 500 GB of traffic between serv-0202 and serv-0203. When the VM is finally created there is no real disk allocation from oVirt, only my tcpdump log file. I do not know why this traffic exists.)
log from serv-0202 (engine server):
2014-04-24 13:11:36,241 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [1a138258] Correlation ID: 1a138258, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 2014-04-24 13:11:36,255 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] hostFromVds::selectedVds - srv-0202, spmStatus Free, storage pool Default 2014-04-24 13:11:36,258 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] starting spm on vds srv-0202, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:36,259 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, SpmStartVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 778a334c 2014-04-24 13:11:36,310 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling started: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 2014-04-24 13:11:37,315 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Failed in HSMGetTaskStatusVDS method 2014-04-24 13:11:37,316 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 task status = finished 2014-04-24 13:11:37,316 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-04-24 13:11:37,363 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended, spm status: Free 2014-04-24 13:11:37,364 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, HSMClearTaskVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, taskId=198c7765-38cb-42e7-9349-93ca43be7066), log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, HSMClearTaskVDSCommand, log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@dfe925d, log id: 778a334c 2014-04-24 13:11:37,411 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool 2014-04-24 13:11:37,416 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: 443b1ed8, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 
2014-04-24 13:11:37,418 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-04-24 13:11:37,466 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Irs placed on server fbdf0655-6560-4e12-a95a-875592f62cb5 failed. Proceed Failover 2014-04-24 13:11:37,528 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] hostFromVds::selectedVds - srv-0203, spmStatus Free, storage pool Default 2014-04-24 13:11:37,530 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] starting spm on vds srv-0203, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:37,531 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, SpmStartVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 77e0918 2014-04-24 13:11:37,589 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling started: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 2014-04-24 13:11:38,595 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 task status = finished 2014-04-24 13:11:38,652 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended, spm status: SPM 2014-04-24 13:11:38,653 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, HSMClearTaskVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, taskId=81164899-b8b5-4ea5-9c82-94b66a3df741), log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, HSMClearTaskVDSCommand, log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@67238f8a, log id: 77e0918 2014-04-24 13:11:38,699 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Initialize Irs proxy from vds: srv-0203.lttd.br 2014-04-24 13:11:38,703 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host srv-0203 (Address: srv-0203.lttd.br). 2014-04-24 13:11:38,703 WARN [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-48) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue. 
2014-04-24 13:11:38,711 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false), log id: 710a52c9 2014-04-24 13:11:38,735 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] -- executeIrsBrokerCommand: Attempting on storage pool 5849b030-626e-47cb-ad90-3ce782d831b3 2014-04-24 13:11:38,736 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, HSMGetAllTasksInfoVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f), log id: 14a15273 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 14a15273 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 710a52c9 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] Discovered no tasks on Storage Pool Default 2014-04-24 13:14:52,094 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:14:52,097 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational 2014-04-24 13:14:54,923 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-37) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202 2014-04-24 13:17:59,281 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Lock Acquired to object EngineLock [exclusiveLocks= key: *servidor-teste* value: VM_NAME , sharedLocks= key: 1f08d35a-adf0-4734-9ce6-1431406096ba value: TEMPLATE key: c8e52f2a-5384-41ee-af77-7ee37bf54355 value: DISK ] 2014-04-24 13:17:59,302 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Running command: AddVmFromTemplateCommand internal: false. Entities affected : ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups, ID: 1f08d35a-adf0-4734-9ce6-1431406096ba Type: VmTemplate, ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage, ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups 2014-04-24 13:17:59,336 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] START, SetVmStatusVDSCommand( vmId = 8a94d957-621e-4cd6-b94d-64a0572cb759, status = ImageLocked), log id: 6ada3a4a 2014-04-24 13:17:59,339 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] FINISH, SetVmStatusVDSCommand, log id: 6ada3a4a 2014-04-24 13:17:59,344 INFO [org.ovirt.engine.core.bll.CreateCloneOfTemplateCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] Running command: CreateCloneOfTemplateCommand internal: true. 
Entities affected : ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage 2014-04-24 13:17:59,371 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] START, CopyImageVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, imageGroupId = c8e52f2a-5384-41ee-af77-7ee37bf54355, imageId = 5c642d47-4f03-4a81-8a10-067b98e068f4, dstImageGroupId = 5a09cae5-c7a1-466d-9b69-ff8ad739d71c, vmId = 1f08d35a-adf0-4734-9ce6-1431406096ba, dstImageId = 2d82ce92-96f1-482c-b8fe-c21d9dfb23e6, imageDescription = , dstStorageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, copyVolumeType = LeafVol, volumeFormat = RAW, preallocate = Sparse, postZero = false, force = false), log id: 4a480fe7 2014-04-24 13:17:59,372 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- executeIrsBrokerCommand: calling 'copyImage' with two new parameters: description and UUID 2014-04-24 13:17:59,373 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- copyImage parameters: sdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 vmGUID=1f08d35a-adf0-4734-9ce6-1431406096ba srcImageGUID=c8e52f2a-5384-41ee-af77-7ee37bf54355 srcVolUUID=5c642d47-4f03-4a81-8a10-067b98e068f4 dstImageGUID=5a09cae5-c7a1-466d-9b69-ff8ad739d71c dstVolUUID=2d82ce92-96f1-482c-b8fe-c21d9dfb23e6 descr= dstSdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 2014-04-24 13:17:59,442 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] FINISH, CopyImageVDSCommand, return: 00000000-0000-0000-0000-000000000000, log id: 4a480fe7 2014-04-24 13:17:59,446 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 755c7619-60e6-4899-b772-17c56cdec057 2014-04-24 13:17:59,447 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandMultiAsyncTasks::AttachTask: Attaching task e8726bad-05ff-4f89-a127-146a3f8bceb2 to command 755c7619-60e6-4899-b772-17c56cdec057. 2014-04-24 13:17:59,451 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-9) [48e79aaf] Adding task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet.. 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) [48e79aaf] Correlation ID: *7fb59186*, Job ID: aeb08ac5-d157-40ae-bcd5-ec68d9cc5ae8, Call Stack: null, Custom Event ID: -1, Message: VM* servidor-teste creation was initiated by admin.* 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] BaseAsyncTask::startPollingTask: Starting to poll task e8726bad-05ff-4f89-a127-146a3f8bceb2. 
2014-04-24 13:17:59,560 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now 2014-04-24 13:17:59,566 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-99) SPMAsyncTask::PollTask: Polling task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status running. 2014-04-24 13:17:59,567 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Finished polling Tasks, will poll again in 10 seconds. 2014-04-24 13:17:59,653 INFO [org.ovirt.engine.core.bll.network.vm.ReorderVmNicsCommand] (ajp--127.0.0.1-8702-5) [601e9dcb] Running command: ReorderVmNicsCommand internal: false. Entities affected : ID: 8a94d957-621e-4cd6-b94d-64a0572cb759 Type: VM 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].* 2014-04-24 13:19:54,926 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:19:54,929 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_*DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational* 2014-04-24 13:19:57,802 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-36) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202 ^C
On Thu, Apr 24, 2014 at 3:27 AM, Moti Asayag <masayag@redhat.com> wrote:
----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: users@ovirt.org Sent: Monday, April 14, 2014 5:13:12 PM Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
Hello,
When I create virtual machine from a template (centos6.5, 2 cores, 8GB mem, 500GB hd) this process takes almost 2 hours. I click on "New VM" button and just select the template and click ok.
engine.log show me high network consumption (98%) between engine-server host and SPM host.
Could you share the piece of the log which indicates that the 98% consumption is between the engine server and the SPM host (as opposed to between the SPM node and the storage server)?

----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: "Moti Asayag" <masayag@redhat.com> Cc: users@ovirt.org Sent: Thursday, April 24, 2014 8:04:51 PM Subject: Re: [ovirt-users] does SPM can run over ovirt-engine host ?
I created a link to an image showing the network consumption between the engine and the SPM.
http://pt-br.tinypic.com/r/dzi80i/8 http://tinypic.com/view.php?pic=dzi80i&s=8#.U1lEKfldVyN
The image shows a generic message regarding the host's network consumption. In 3.4 there will be a specific log stating the device name [1]. You can check which NIC it is by searching for rxRate or txRate in the output of the following command, which should be executed on the SPM: vdsClient -s localhost getVdsStats. Once you've identified the interface, you can see whether 'ovirtmgmt' is the one reported with that high consumption, or whether it is configured on top of the heavily used NIC. Otherwise, there is another issue, not related to engine-SPM connectivity. You can paste the output of 'vdsClient -s localhost getVdsStats' and 'vdsClient -s localhost getVdsCaps' so we can examine both the utilization and the network configuration. [1] https://bugzilla.redhat.com/show_bug.cgi?id=1070667
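For example, something along these lines (only a sketch; adjust to whatever interface names your hosts actually report) may be enough to spot the busy NIC:

# run on the SPM host; filter the stats output down to the per-interface rates
vdsClient -s localhost getVdsStats | grep -oE "'(name|rxRate|txRate)': '[^']*'"
# check which physical NIC the ovirtmgmt bridge is built on
vdsClient -s localhost getVdsCaps | grep -A 3 ovirtmgmt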
Does this forum have a preferred image/blog hosting site? Thanks.

Below is the log from spm-lock.log:
[root@srv-0203 vdsm]# tail -f spm-lock.log
[2014-03-06 18:21:21] Protecting spm lock for vdsm pid 2992
[2014-03-06 18:21:21] Trying to acquire lease - sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d lease_file=/rhev/data-center/mnt/srv-0202.lttd.br:_var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases id=1 lease_time_ms=5000 io_op_to_ms=1000
[2014-03-06 18:21:34] Lease acquired sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.ltd.br:_var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases, TS=1394140892739675
[2014-03-06 18:21:34] Protecting spm lock for vdsm pid 2992
[2014-03-06 18:21:34] Started renewal process (pid=17519) for sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.unb.br:_var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases
[2014-03-06 18:21:34] Stopping lease for pool: 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d pgrps: -17519 User defined signal 1
[2014-03-06 18:21:34] releasing lease sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.br:_var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases
On Thu, Apr 24, 2014 at 1:51 PM, Tamer Lima <tamer.americo@gmail.com> wrote:
Hi, this is the relevant piece of engine.log from srv-0202 (the engine server); the SPM was defined on srv-0203.

Log from srv-0202 (engine server):
2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].
2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].

The full engine.log excerpt earlier in this thread starts at the moment I press the button to create a new virtual machine.

(The VM creation procedure takes more than one hour. I ran tcpdump on srv-0203 (the SPM) while the VM was being created; even though it was created with thin provisioning, I collected 500 GB of traffic between srv-0202 and srv-0203. When the VM was finally created there was no real disk allocation from oVirt, only my tcpdump capture file. I do not know why this traffic exists.)
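Something like this is what I mean by the tcpdump capture (a sketch only; em1 and the two management IPs are the ones shown in the getVdsCaps output further down, adjust as needed):

# on srv-0203 (the SPM): watch only the engine <-> SPM conversation on the management NIC
tcpdump -i em1 -nn 'host 172.16.6.192 and host 172.16.6.193'
# same filter, but discard the packets instead of filling the disk with a capture file
tcpdump -i em1 -nn 'host 172.16.6.192 and host 172.16.6.193' -w /dev/null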

Below are the results of the commands before and during VM creation. I executed the commands on srv-0202 (engine) and srv-0203 (vdsm + SPM):
1) first, srv-0202 with the commands 'vdsClient -s localhost getVdsStats' AND 'vdsClient -s localhost getVdsCaps' BEFORE creating the VM
1.1) the same for srv-0203
2) second, srv-0202 with the same commands DURING VM creation, when the admin portal shows the network consumption exceeding 98%
2.1) the same for srv-0203
About the VM in this test: created with THIN provisioning from the template, and running on srv-0203.
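Roughly, the collection looked like this (a sketch; the file names are just examples):

# snapshot the stats before starting the VM creation
vdsClient -s localhost getVdsStats > /tmp/vdsStats.before
vdsClient -s localhost getVdsCaps > /tmp/vdsCaps.before
# start the "New VM" from the template in webadmin, then, while the copy is running:
vdsClient -s localhost getVdsStats > /tmp/vdsStats.during
# compare the interface rates between the two snapshots
diff /tmp/vdsStats.before /tmp/vdsStats.during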
"Once you've identified the interface, you can see if the 'ovirtmgmt' is reported with that high consumption " <<<== I dont see high consumption on rx/txRate
"or if it is configured on top of the highly used nic." my ports are listed on srv-0202 as 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], and on srv-0203 as 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], I dont know when vnet is created and/or modified
======================================================== 1) srv-0202 vdsClient -s localhost getVdsStat and vdsClient -s localhost getVdsCaps [root@srv-0202 ~]# vdsClient -s localhost getVdsStats anonHugePages = '2394' cpuIdle = '97.23' cpuLoad = '0.16' cpuSys = '0.97' cpuSysVdsmd = '0.50' cpuUser = '1.80' cpuUserVdsmd = '1.00' dateTime = '2014-04-25T16:15:18 GMT' diskStats = {'/tmp': {'free': '1102470'}, '/var/log': {'free': '1102470'}, '/var/log/core': {'free': '1102470'}, '/var/run/vdsm/': {'free': '1102470'}} elapsedTime = '1346980' generationID = 'a1c01b50-eb16-4c73-8528-297b5116e141' haStats = {'active': False, 'configured': False, 'globalMaintenance': False, 'localMaintenance': False, 'score': 0} ksmCpu = 5 ksmPages = 64 ksmState = True memAvailable = 10341 memCommitted = 18627 memFree = 26565 memShared = 296595 memUsed = '18' momStatus = 'active' netConfigDirty = 'False' network = {';vdsmdummy;': {'name': ';vdsmdummy;', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond0': {'name': 'bond0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond1': {'name': 'bond1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond2': {'name': 'bond2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond3': {'name': 'bond3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond4': {'name': 'bond4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em1': {'name': 'em1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', * 'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.1'}, 'em2': {'name': 'em2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'down',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em3': {'name': 'em3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em4': {'name': 'em4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'lo': {'name': 'lo', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.2', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.2'}, 'ovirtmgmt': {'name': 'ovirtmgmt', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.1'}, 'vnet0': {'name': 'vnet0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet1': {'name': 'vnet1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet2': {'name': 'vnet2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}} rxDropped = '0' rxRate = '0.20' statsAge = '0.20' storageDomains = {'3410b593-dbd0-4ab8-9a21-3e3c51fe8e90': {'acquired': True, 'code': 0, 'delay': '0.000173751', 
'lastCheck': '7.5', 'valid': True, 'version': 3}, '6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d': {'acquired': False, 'code': 358, 'delay': '0', 'lastCheck': '7.5', 'valid': False, 'version': -1}} swapFree = 7390 swapTotal = 8095 thpState = 'always' txDropped = '0' txRate = '0.39' vmActive = 3 vmCount = 3 vmMigrating = 0 [root@srv-0202 ~]# vdsClient -s localhost getVdsCaps HBAInventory = {'FC': [{'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2000001b329f5fce', 'wwpn': '2100001b329f5fce'}, {'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2001001b32bf5fce', 'wwpn': '2101001b32bf5fce'}], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:b956608e509'}]} ISCSIInitiatorName = 'iqn.1994-05.com.redhat:b956608e509' bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond1': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond2': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond3': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond4': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}} bridges = {';vdsmdummy;': {'addr': '', 'cfg': {}, 'gateway': '', 'ipv6addrs': [], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': [], 'stp': 'off'}, 'ovirtmgmt': {'addr': '172.16.6.192', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'DNS1': '172.16.4.2', 'DNS2': '164.41.222.130', 'DNS3': '164.41.222.207', 'DOMAIN': 'lttd.br', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.192', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], 'stp': 'off'}} clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4'] cpuCores = '8' cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270' cpuModel = 'Intel(R) Xeon(R) CPU X5560 @ 2.80GHz' cpuSockets = '2' cpuSpeed = '2793.024' cpuThreads = '16' emulatedMachines = ['rhel6.5.0', 'pc', 'rhel6.4.0', 'rhel6.3.0', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = '65' hooks = {} kvmEnabled = 'true' lastClient = '127.0.0.1' lastClientIface = 'lo' management_ip = '0.0.0.0' memSize = '32094' netConfigDirty = 'False' networks = {'ovirtmgmt': {'addr': '172.16.6.192', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'DNS1': '172.16.4.2', 'DNS2': '164.41.222.130', 'DNS3': '164.41.222.207', 'DOMAIN': 'lttd.br', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.192', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'iface': 'ovirtmgmt', 'ipv6addrs': 
['fe80::222:19ff:fe69:850f/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], 'stp': 'off'}} nics = {'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'em1', 'HWADDR': '00:22:19:69:85:0f', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': '00:22:19:69:85:0f', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em2': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em2', 'HWADDR': '00:22:19:69:85:11', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'System em2', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '16f75c25-48cc-4dec-97ff-0e7f0822b26f'}, 'hwaddr': '00:22:19:69:85:11', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em3': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEVICE': 'em3', 'HWADDR': '00:22:19:69:85:13', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'TYPE': 'Ethernet', 'UUID': '1d8a2db3-d8c2-480b-ab2a-2decea844280'}, 'hwaddr': '00:22:19:69:85:13', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em4': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEVICE': 'em4', 'HWADDR': '00:22:19:69:85:15', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'TYPE': 'Ethernet', 'UUID': 'e139faa0-bd6c-4aec-b524-134d2bd30fb6'}, 'hwaddr': '00:22:19:69:85:15', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}} operatingSystem = {'name': 'RHEL', 'release': '5.el6.centos.11.2', 'version': '6'} packages2 = {'kernel': {'buildtime': 1395788395.0, 'release': '431.11.2.el6.x86_64', 'version': '2.6.32'}, 'libvirt': {'buildtime': 1396856799, 'release': '29.el6_5.7', 'version': '0.10.2'}, 'mom': {'buildtime': 1391183641, 'release': '1.el6', 'version': '0.4.0'}, 'qemu-img': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'qemu-kvm': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'spice-server': {'buildtime': 1386756528, 'release': '6.el6_5.1', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1395806448, 'release': '0.el6', 'version': '4.14.6'}} reservedMem = '321' rngSources = ['random'] software_revision = '0' software_version = '4.14' supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4'] supportedProtocols = ['2.2', '2.3'] uuid = '4C4C4544-0050-4410-804C-B8C04F374D31' version_name = 'Snow Man' vlans = {} vmTypes = ['kvm'] [root@srv-0202 ~]# ================================================================ 1.1) srv-0203 vdsClient -s localhost getVdsStat and vdsClient -s localhost getVdsCaps [root@srv-0203 ~]# vdsClient -s localhost getVdsStats anonHugePages = '6608' cpuIdle = '98.26' cpuLoad = '0.00' cpuSys = '0.84' cpuSysVdsmd = '0.62' cpuUser = '0.90' cpuUserVdsmd = '1.24' dateTime = '2014-04-25T16:16:41 GMT' diskStats = {'/tmp': {'free': '2189974'}, '/var/log': {'free': '2189974'}, '/var/log/core': {'free': '2189974'}, '/var/run/vdsm/': {'free': '2189974'}} elapsedTime = '1346974' generationID = '5a96bbca-5947-41b9-b61b-5a7d8bd603fe' haStats = {'active': False, 'configured': False, 'globalMaintenance': False, 'localMaintenance': False, 'score': 0} ksmCpu = 4 ksmPages = 64 ksmState = True memAvailable = 1202 memCommitted = 24836 memFree = 21835 memShared = 515063 memUsed = '32' momStatus = 'active' netConfigDirty = 'False' network = {';vdsmdummy;': {'name': ';vdsmdummy;', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond0': 
{'name': 'bond0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond1': {'name': 'bond1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond2': {'name': 'bond2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond3': {'name': 'bond3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond4': {'name': 'bond4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em1': {'name': 'em1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.1', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em2': {'name': 'em2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em3': {'name': 'em3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em4': {'name': 'em4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'lo': {'name': 'lo', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.2', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.2'}, 'ovirtmgmt': {'name': 'ovirtmgmt', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.1', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet0': {'name': 'vnet0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet1': {'name': 'vnet1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet2': {'name': 'vnet2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet3': {'name': 'vnet3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}} rxDropped = '0' rxRate = '0.20' statsAge = '1.45' storageDomains = {'3410b593-dbd0-4ab8-9a21-3e3c51fe8e90': {'acquired': True, 'code': 0, 'delay': '0.000300909', 'lastCheck': '5.5', 'valid': True, 'version': 3}, '6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d': {'acquired': True, 'code': 0, 'delay': '0.000324327', 'lastCheck': '6.3', 'valid': True, 'version': 0}} swapFree = 6544 swapTotal = 8023 thpState = 'always' txDropped = '0' txRate = '0.11' vmActive = 4 vmCount = 4 vmMigrating = 0 [root@srv-0203 ~]# vdsClient -s localhost getVdsCaps HBAInventory = {'FC': [{'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2000001b329f95ce', 'wwpn': '2100001b329f95ce'}, {'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2001001b32bf95ce', 'wwpn': '2101001b32bf95ce'}], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:a94b25bd22a'}]} ISCSIInitiatorName = 'iqn.1994-05.com.redhat:a94b25bd22a' bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond1': {'addr': '', 
'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond2': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond3': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond4': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}} bridges = {';vdsmdummy;': {'addr': '', 'cfg': {}, 'gateway': '', 'ipv6addrs': [], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': [], 'stp': 'off'}, 'ovirtmgmt': {'addr': '172.16.6.193', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.193', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], 'stp': 'off'}} clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4'] cpuCores = '8' cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270' cpuModel = 'Intel(R) Xeon(R) CPU X5560 @ 2.80GHz' cpuSockets = '2' cpuSpeed = '2793.159' cpuThreads = '16' emulatedMachines = ['rhel6.5.0', 'pc', 'rhel6.4.0', 'rhel6.3.0', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = '65' hooks = {} kvmEnabled = 'true' lastClient = '127.0.0.1' lastClientIface = 'lo' management_ip = '0.0.0.0' memSize = '32094' netConfigDirty = 'False' networks = {'ovirtmgmt': {'addr': '172.16.6.193', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.193', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'iface': 'ovirtmgmt', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], 'stp': 'off'}} nics = {'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'em1', 'HWADDR': '00:22:19:69:b6:c1', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': '00:22:19:69:b6:c1', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em2': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em2', 'HWADDR': '00:22:19:69:B6:C3', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth1', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'yes', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '61c021d1-c174-4b06-be72-1a4e7d6fa80e'}, 'hwaddr': '00:22:19:69:b6:c3', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c3/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em3': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em3', 'HWADDR': '00:22:19:69:B6:C5', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth2', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 
'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': 'fecc09f3-51b4-4ba5-a306-abf3a12f3971'}, 'hwaddr': '00:22:19:69:b6:c5', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em4': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em4', 'HWADDR': '00:22:19:69:B6:C7', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth3', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '6e850fef-33b4-41e2-97d7-1fcd9bb334ed'}, 'hwaddr': '00:22:19:69:b6:c7', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}} operatingSystem = {'name': 'RHEL', 'release': '5.el6.centos.11.2', 'version': '6'} packages2 = {'kernel': {'buildtime': 1395788395.0, 'release': '431.11.2.el6.x86_64', 'version': '2.6.32'}, 'libvirt': {'buildtime': 1396856799, 'release': '29.el6_5.7', 'version': '0.10.2'}, 'mom': {'buildtime': 1391183641, 'release': '1.el6', 'version': '0.4.0'}, 'qemu-img': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'qemu-kvm': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'spice-server': {'buildtime': 1386756528, 'release': '6.el6_5.1', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1395806448, 'release': '0.el6', 'version': '4.14.6'}} reservedMem = '321' rngSources = ['random'] software_revision = '0' software_version = '4.14' supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4'] supportedProtocols = ['2.2', '2.3'] uuid = '4C4C4544-0050-4410-804C-B6C04F374D31' version_name = 'Snow Man' vlans = {} vmTypes = ['kvm'] ====================================== 2. srv-0202 vdsClient -s localhost getVdsStat and vdsClient -s localhost getVdsCaps [root@srv-0202 ~]# vdsClient -s localhost getVdsStats anonHugePages = '2318' cpuIdle = '98.70' cpuLoad = '0.24' cpuSys = '0.94' cpuSysVdsmd = '0.62' cpuUser = '0.37' cpuUserVdsmd = '0.75' dateTime = '2014-04-25T17:37:35 GMT' diskStats = {'/tmp': {'free': '1097545'}, '/var/log': {'free': '1097545'}, '/var/log/core': {'free': '1097545'}, '/var/run/vdsm/': {'free': '1097545'}} elapsedTime = '1351918' generationID = 'a1c01b50-eb16-4c73-8528-297b5116e141' haStats = {'active': False, 'configured': False, 'globalMaintenance': False, 'localMaintenance': False, 'score': 0} ksmCpu = 3 ksmPages = 64 ksmState = True memAvailable = 10221 memCommitted = 18627 memFree = 26835 memShared = 274282 memUsed = '17' momStatus = 'active' netConfigDirty = 'False' network = {';vdsmdummy;': {'name': ';vdsmdummy;', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond0': {'name': 'bond0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond1': {'name': 'bond1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond2': {'name': 'bond2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond3': {'name': 'bond3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond4': {'name': 'bond4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em1': {'name': 'em1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.5', 
'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '98.4'}, 'em2': {'name': 'em2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em3': {'name': 'em3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em4': {'name': 'em4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'lo': {'name': 'lo', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'ovirtmgmt': {'name': 'ovirtmgmt', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.4', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '94.1'}, 'vnet0': {'name': 'vnet0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet1': {'name': 'vnet1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet2': {'name': 'vnet2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}} rxDropped = '0' rxRate = '0.85' statsAge = '0.72' storageDomains = {'3410b593-dbd0-4ab8-9a21-3e3c51fe8e90': {'acquired': True, 'code': 0, 'delay': '0.000274753', 'lastCheck': '7.4', 'valid': True, 'version': 3}, '6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d': {'acquired': False, 'code': 358, 'delay': '0', 'lastCheck': '2.3', 'valid': False, 'version': -1}} swapFree = 7090 swapTotal = 8095 thpState = 'always' txDropped = '0' txRate = '100.00' vmActive = 3 vmCount = 3 vmMigrating = 0 [root@srv-0202 ~]# vdsClient -s localhost getVdsCaps HBAInventory = {'FC': [{'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2000001b329f5fce', 'wwpn': '2100001b329f5fce'}, {'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2001001b32bf5fce', 'wwpn': '2101001b32bf5fce'}], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:b956608e509'}]} ISCSIInitiatorName = 'iqn.1994-05.com.redhat:b956608e509' bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond1': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond2': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond3': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond4': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}} bridges = {';vdsmdummy;': {'addr': '', 'cfg': {}, 'gateway': '', 'ipv6addrs': [], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': [], 'stp': 'off'}, 'ovirtmgmt': {'addr': '172.16.6.192', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'DNS1': '172.16.4.2', 'DNS2': '164.41.222.130', 'DNS3': '164.41.222.207', 'DOMAIN': 'lttd.br', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.192', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'ipv6addrs': 
['fe80::222:19ff:fe69:850f/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], 'stp': 'off'}} clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4'] cpuCores = '8' cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270' cpuModel = 'Intel(R) Xeon(R) CPU X5560 @ 2.80GHz' cpuSockets = '2' cpuSpeed = '2793.024' cpuThreads = '16' emulatedMachines = ['rhel6.5.0', 'pc', 'rhel6.4.0', 'rhel6.3.0', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = '65' hooks = {} kvmEnabled = 'true' lastClient = '127.0.0.1' lastClientIface = 'lo' management_ip = '0.0.0.0' memSize = '32094' netConfigDirty = 'False' networks = {'ovirtmgmt': {'addr': '172.16.6.192', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'DNS1': '172.16.4.2', 'DNS2': '164.41.222.130', 'DNS3': '164.41.222.207', 'DOMAIN': 'lttd.br', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.192', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'iface': 'ovirtmgmt', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], 'stp': 'off'}} nics = {'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'em1', 'HWADDR': '00:22:19:69:85:0f', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': '00:22:19:69:85:0f', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em2': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em2', 'HWADDR': '00:22:19:69:85:11', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'System em2', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '16f75c25-48cc-4dec-97ff-0e7f0822b26f'}, 'hwaddr': '00:22:19:69:85:11', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em3': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEVICE': 'em3', 'HWADDR': '00:22:19:69:85:13', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'TYPE': 'Ethernet', 'UUID': '1d8a2db3-d8c2-480b-ab2a-2decea844280'}, 'hwaddr': '00:22:19:69:85:13', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em4': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEVICE': 'em4', 'HWADDR': '00:22:19:69:85:15', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'TYPE': 'Ethernet', 'UUID': 'e139faa0-bd6c-4aec-b524-134d2bd30fb6'}, 'hwaddr': '00:22:19:69:85:15', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}} operatingSystem = {'name': 'RHEL', 'release': '5.el6.centos.11.2', 'version': '6'} packages2 = {'kernel': {'buildtime': 1395788395.0, 'release': '431.11.2.el6.x86_64', 'version': '2.6.32'}, 'libvirt': {'buildtime': 1396856799, 'release': '29.el6_5.7', 'version': '0.10.2'}, 'mom': {'buildtime': 1391183641, 'release': '1.el6', 'version': '0.4.0'}, 'qemu-img': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'qemu-kvm': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'spice-server': 
{'buildtime': 1386756528, 'release': '6.el6_5.1', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1395806448, 'release': '0.el6', 'version': '4.14.6'}} reservedMem = '321' rngSources = ['random'] software_revision = '0' software_version = '4.14' supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4'] supportedProtocols = ['2.2', '2.3'] uuid = '4C4C4544-0050-4410-804C-B8C04F374D31' version_name = 'Snow Man' vlans = {} vmTypes = ['kvm'] [root@srv-0202 ~]# ======================================================================== 2.1 srv-0203 [root@srv-0203 ~]# vdsClient -s localhost getVdsStats anonHugePages = '6646' cpuIdle = '97.74' cpuLoad = '1.33' cpuSys = '1.61' cpuSysVdsmd = '0.50' cpuUser = '0.65' cpuUserVdsmd = '1.25' dateTime = '2014-04-25T17:41:04 GMT' diskStats = {'/tmp': {'free': '2189939'}, '/var/log': {'free': '2189939'}, '/var/log/core': {'free': '2189939'}, '/var/run/vdsm/': {'free': '2189939'}} elapsedTime = '1352036' generationID = '5a96bbca-5947-41b9-b61b-5a7d8bd603fe' haStats = {'active': False, 'configured': False, 'globalMaintenance': False, 'localMaintenance': False, 'score': 0} ksmCpu = 8 ksmPages = 64 ksmState = True memAvailable = 1163 memCommitted = 24836 memFree = 22101 memShared = 508863 memUsed = '32' momStatus = 'active' netConfigDirty = 'False' network = {';vdsmdummy;': {'name': ';vdsmdummy;', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond0': {'name': 'bond0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond1': {'name': 'bond1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond2': {'name': 'bond2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond3': {'name': 'bond3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond4': {'name': 'bond4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em1': {'name': 'em1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '98.4', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.5'}, 'em2': {'name': 'em2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em3': {'name': 'em3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em4': {'name': 'em4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'lo': {'name': 'lo', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'ovirtmgmt': {'name': 'ovirtmgmt', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '94.5', 'speed': '1000', * 'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.5'}, 'vnet0': {'name': 'vnet0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', * 'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet1': {'name': 'vnet1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 
*'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet2': {'name': 'vnet2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet3': {'name': 'vnet3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}} rxDropped = '0' rxRate = '96.47' statsAge = '0.07' storageDomains = {'3410b593-dbd0-4ab8-9a21-3e3c51fe8e90': {'acquired': True, 'code': 0, 'delay': '0.133906', 'lastCheck': '1.4', 'valid': True, 'version': 3}, '6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d': {'acquired': True, 'code': 0, 'delay': '0.133774', 'lastCheck': '7.2', 'valid': True, 'version': 0}} swapFree = 6263 swapTotal = 8023 thpState = 'always' txDropped = '0' txRate = '0.49' vmActive = 4 vmCount = 4 vmMigrating = 0 [root@srv-0203 ~]# [root@srv-0203 ~]# vdsClient -s localhost getVdsCaps HBAInventory = {'FC': [{'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2000001b329f95ce', 'wwpn': '2100001b329f95ce'}, {'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2001001b32bf95ce', 'wwpn': '2101001b32bf95ce'}], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:a94b25bd22a'}]} ISCSIInitiatorName = 'iqn.1994-05.com.redhat:a94b25bd22a' bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond1': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond2': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond3': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond4': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}} bridges = {';vdsmdummy;': {'addr': '', 'cfg': {}, 'gateway': '', 'ipv6addrs': [], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': [], 'stp': 'off'}, 'ovirtmgmt': {'addr': '172.16.6.193', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.193', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], 'stp': 'off'}} clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4'] cpuCores = '8' cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270' cpuModel = 'Intel(R) Xeon(R) CPU X5560 @ 2.80GHz' cpuSockets = '2' cpuSpeed = '2793.159' cpuThreads = '16' emulatedMachines = ['rhel6.5.0', 'pc', 'rhel6.4.0', 'rhel6.3.0', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = '65' hooks = {} kvmEnabled = 'true' lastClient = '127.0.0.1' lastClientIface = 'lo' management_ip = '0.0.0.0' memSize = '32094' 
netConfigDirty = 'False' networks = {'ovirtmgmt': {'addr': '172.16.6.193', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.193', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'iface': 'ovirtmgmt', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], 'stp': 'off'}} nics = {'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'em1', 'HWADDR': '00:22:19:69:b6:c1', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': '00:22:19:69:b6:c1', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em2': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em2', 'HWADDR': '00:22:19:69:B6:C3', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth1', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'yes', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '61c021d1-c174-4b06-be72-1a4e7d6fa80e'}, 'hwaddr': '00:22:19:69:b6:c3', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c3/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em3': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em3', 'HWADDR': '00:22:19:69:B6:C5', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth2', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': 'fecc09f3-51b4-4ba5-a306-abf3a12f3971'}, 'hwaddr': '00:22:19:69:b6:c5', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em4': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em4', 'HWADDR': '00:22:19:69:B6:C7', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth3', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '6e850fef-33b4-41e2-97d7-1fcd9bb334ed'}, 'hwaddr': '00:22:19:69:b6:c7', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}} operatingSystem = {'name': 'RHEL', 'release': '5.el6.centos.11.2', 'version': '6'} packages2 = {'kernel': {'buildtime': 1395788395.0, 'release': '431.11.2.el6.x86_64', 'version': '2.6.32'}, 'libvirt': {'buildtime': 1396856799, 'release': '29.el6_5.7', 'version': '0.10.2'}, 'mom': {'buildtime': 1391183641, 'release': '1.el6', 'version': '0.4.0'}, 'qemu-img': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'qemu-kvm': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'spice-server': {'buildtime': 1386756528, 'release': '6.el6_5.1', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1395806448, 'release': '0.el6', 'version': '4.14.6'}} reservedMem = '321' rngSources = ['random'] software_revision = '0' software_version = '4.14' supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4'] supportedProtocols = ['2.2', '2.3'] uuid = '4C4C4544-0050-4410-804C-B6C04F374D31' version_name = 'Snow Man' vlans = {} vmTypes = ['kvm'] [root@srv-0203 ~]# On Thu, Apr 24, 2014 at 5:03 PM, Moti Asayag <masayag@redhat.com> wrote:
----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: "Moti Asayag" <masayag@redhat.com> Cc: users@ovirt.org Sent: Thursday, April 24, 2014 8:04:51 PM Subject: Re: [ovirt-users] does SPM can run over ovirt-engine host ?
I created a link to an image showing the network consumption between the engine and the SPM.
http://pt-br.tinypic.com/r/dzi80i/8 http://tinypic.com/view.php?pic=dzi80i&s=8#.U1lEKfldVyN
The image shows a generic message regarding the host network consumption. In 3.4 there will be a specific log stating the device name [1].
You can check which nic it is specifically by searching for the rxRate or txRate in the output of the following command, which should be executed on the SPM:
vdsClient -s localhost getVdsStats
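For example, something along these lines should pull just the per-interface name/rxRate/txRate values out of that output (a rough sketch -- it assumes the 'network = {...}' dict is printed the way it is pasted below, so the grep may need adjusting):

# rough sketch: keep only interface name, rxRate and txRate from getVdsStats,
# then join every 3 matches onto one line so the busy device stands out
vdsClient -s localhost getVdsStats \
  | grep -o "'name': '[^']*'\|'rxRate': '[^']*'\|'txRate': '[^']*'" \
  | paste - - -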
Once you've identified the interface, you can see whether 'ovirtmgmt' is reported with that high consumption or whether it is configured on top of the heavily used nic. Otherwise, there is another issue, not related to engine-SPM connectivity.
You can paste the output of 'vdsClient -s localhost getVdsStats' and 'vdsClient -s localhost getVdsCaps' so that both the utilization and the network configuration can be examined.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1070667
srv-0202, spmStatus Free, storage pool Default 2014-04-24 13:11:36,258 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] starting spm on vds srv-0202, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:36,259 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, SpmStartVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 778a334c 2014-04-24 13:11:36,310 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling started: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 2014-04-24 13:11:37,315 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Failed in HSMGetTaskStatusVDS method 2014-04-24 13:11:37,316 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 task status = finished 2014-04-24 13:11:37,316 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-04-24 13:11:37,363 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended, spm status: Free 2014-04-24 13:11:37,364 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, HSMClearTaskVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, taskId=198c7765-38cb-42e7-9349-93ca43be7066), log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, HSMClearTaskVDSCommand, log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@dfe925d, log id: 778a334c 2014-04-24 13:11:37,411 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool 2014-04-24 13:11:37,416 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: 443b1ed8, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 
2014-04-24 13:11:37,418 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-04-24 13:11:37,466 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Irs placed on server fbdf0655-6560-4e12-a95a-875592f62cb5 failed. Proceed Failover 2014-04-24 13:11:37,528 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] hostFromVds::selectedVds
Does this forum have a preferred image hosting site/blog? Thanks.
Below is the log from spm-lock.log:
[root@srv-0203 vdsm]# tail -f spm-lock.log [2014-03-06 18:21:21] Protecting spm lock for vdsm pid 2992 [2014-03-06 18:21:21] Trying to acquire lease - sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d lease_file=/rhev/data-center/mnt/srv-0202.lttd.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases id=1 lease_time_ms=5000 io_op_to_ms=1000 [2014-03-06 18:21:34] Lease acquired sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.ltd.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases, TS=1394140892739675 [2014-03-06 18:21:34] *Protecting spm lock for vdsm *pid 2992 [2014-03-06 18:21:34] Started renewal process (pid=17519) for sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.unb.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases [2014-03-06 18:21:34] Stopping lease for pool: 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d pgrps: -17519 User defined signal 1 [2014-03-06 18:21:34] releasing lease sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases
On Thu, Apr 24, 2014 at 1:51 PM, Tamer Lima <tamer.americo@gmail.com> wrote:
Hi, this is the relevant piece of engine.log from srv-0202 (the engine server); the SPM was defined on srv-0203.
log from serv-0202 (engine server): 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].*
Below is the log of the VM creation procedure. The log starts at the moment I click to create a new virtual machine:
(The VM creation procedure takes more than 1 hour. I ran tcpdump on srv-0203 (SPM) and, even when creating with thin provisioning, I collected 500 GB of traffic between srv-0202 and srv-0203. When the VM is finally created there is no real disk allocation from oVirt, only my tcpdump log file. I do not know why this traffic exists.)
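(A header-only capture restricted to those two hosts would have kept the file much smaller; roughly like this -- em1 as the capture interface is only my assumption from the stats:)

# rough sketch: capture only packet headers for traffic between the two hosts
tcpdump -i em1 -s 96 -w /tmp/engine-spm.pcap host srv-0202 and host srv-0203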
log from srv-0202 (engine server):
2014-04-24 13:11:36,241 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [1a138258] Correlation ID: 1a138258, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 2014-04-24 13:11:36,255 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] hostFromVds::selectedVds
srv-0203, spmStatus Free, storage pool Default 2014-04-24 13:11:37,530 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] starting spm on vds srv-0203, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:37,531 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, SpmStartVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 77e0918 2014-04-24 13:11:37,589 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling started: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 2014-04-24 13:11:38,595 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 task status = finished 2014-04-24 13:11:38,652 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended, spm status: SPM 2014-04-24 13:11:38,653 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, HSMClearTaskVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, taskId=81164899-b8b5-4ea5-9c82-94b66a3df741), log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, HSMClearTaskVDSCommand, log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@67238f8a , log id: 77e0918 2014-04-24 13:11:38,699 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Initialize Irs proxy from vds: srv-0203.lttd.br 2014-04-24 13:11:38,703 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host srv-0203 (Address: srv-0203.lttd.br). 2014-04-24 13:11:38,703 WARN [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-48) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue. 2014-04-24 13:11:38,711 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false), log id: 710a52c9 2014-04-24 13:11:38,735 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] -- executeIrsBrokerCommand: Attempting on storage pool 5849b030-626e-47cb-ad90-3ce782d831b3 2014-04-24 13:11:38,736 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, HSMGetAllTasksInfoVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f), log id: 14a15273 2014-04-24 13:11:38,741 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 14a15273 2014-04-24 13:11:38,741 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 710a52c9 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] Discovered no tasks on Storage Pool Default 2014-04-24 13:14:52,094 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:14:52,097 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational 2014-04-24 13:14:54,923 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-37) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202 2014-04-24 13:17:59,281 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Lock Acquired to object EngineLock [exclusiveLocks= key: *servidor-teste* value: VM_NAME , sharedLocks= key: 1f08d35a-adf0-4734-9ce6-1431406096ba value: TEMPLATE key: c8e52f2a-5384-41ee-af77-7ee37bf54355 value: DISK ] 2014-04-24 13:17:59,302 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Running command: AddVmFromTemplateCommand internal: false. Entities affected : ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups, ID: 1f08d35a-adf0-4734-9ce6-1431406096ba Type: VmTemplate, ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage, ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups 2014-04-24 13:17:59,336 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] START, SetVmStatusVDSCommand( vmId = 8a94d957-621e-4cd6-b94d-64a0572cb759, status = ImageLocked), log id: 6ada3a4a 2014-04-24 13:17:59,339 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] FINISH, SetVmStatusVDSCommand, log id: 6ada3a4a 2014-04-24 13:17:59,344 INFO [org.ovirt.engine.core.bll.CreateCloneOfTemplateCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] Running command: CreateCloneOfTemplateCommand internal: true. 
Entities affected : ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage 2014-04-24 13:17:59,371 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] START, CopyImageVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, imageGroupId = c8e52f2a-5384-41ee-af77-7ee37bf54355, imageId = 5c642d47-4f03-4a81-8a10-067b98e068f4, dstImageGroupId = 5a09cae5-c7a1-466d-9b69-ff8ad739d71c, vmId = 1f08d35a-adf0-4734-9ce6-1431406096ba, dstImageId = 2d82ce92-96f1-482c-b8fe-c21d9dfb23e6, imageDescription = , dstStorageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, copyVolumeType = LeafVol, volumeFormat = RAW, preallocate = Sparse, postZero = false, force = false), log id: 4a480fe7 2014-04-24 13:17:59,372 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- executeIrsBrokerCommand: calling 'copyImage' with two new parameters: description and UUID 2014-04-24 13:17:59,373 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- copyImage parameters: sdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 vmGUID=1f08d35a-adf0-4734-9ce6-1431406096ba srcImageGUID=c8e52f2a-5384-41ee-af77-7ee37bf54355 srcVolUUID=5c642d47-4f03-4a81-8a10-067b98e068f4 dstImageGUID=5a09cae5-c7a1-466d-9b69-ff8ad739d71c dstVolUUID=2d82ce92-96f1-482c-b8fe-c21d9dfb23e6 descr= dstSdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 2014-04-24 13:17:59,442 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] FINISH, CopyImageVDSCommand, return: 00000000-0000-0000-0000-000000000000, log id: 4a480fe7 2014-04-24 13:17:59,446 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 755c7619-60e6-4899-b772-17c56cdec057 2014-04-24 13:17:59,447 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandMultiAsyncTasks::AttachTask: Attaching task e8726bad-05ff-4f89-a127-146a3f8bceb2 to command 755c7619-60e6-4899-b772-17c56cdec057. 2014-04-24 13:17:59,451 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-9) [48e79aaf] Adding task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet.. 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) [48e79aaf] Correlation ID: *7fb59186*, Job ID: aeb08ac5-d157-40ae-bcd5-ec68d9cc5ae8, Call Stack: null, Custom Event ID: -1, Message: VM* servidor-teste creation was initiated by admin.* 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] BaseAsyncTask::startPollingTask: Starting to poll task e8726bad-05ff-4f89-a127-146a3f8bceb2. 
2014-04-24 13:17:59,560 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now 2014-04-24 13:17:59,566 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-99) SPMAsyncTask::PollTask: Polling task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status running. 2014-04-24 13:17:59,567 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Finished polling Tasks, will poll again in 10 seconds. 2014-04-24 13:17:59,653 INFO [org.ovirt.engine.core.bll.network.vm.ReorderVmNicsCommand] (ajp--127.0.0.1-8702-5) [601e9dcb] Running command: ReorderVmNicsCommand internal: false. Entities affected : ID: 8a94d957-621e-4cd6-b94d-64a0572cb759 Type: VM 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].* 2014-04-24 13:19:54,926 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:19:54,929 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_*DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational* 2014-04-24 13:19:57,802 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-36) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202 ^C
On Thu, Apr 24, 2014 at 3:27 AM, Moti Asayag <masayag@redhat.com> wrote:
----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: users@ovirt.org Sent: Monday, April 14, 2014 5:13:12 PM Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
Hello,
When I create virtual machine from a template (centos6.5, 2 cores,
8GB
mem,
500GB hd) this process takes almost 2 hours. I click on "New VM" button and just select the template and click ok.
engine.log show me high network consumption (98%) between engine-server host and SPM host.
Could you share the piece of log which indicates that the 98% consumption is between the engine server and the SPM host (vs. between the SPM node and the storage server)?
I tried to make my engine-server host a spm host too, but without
sucess.
Does SPM can run over on the same ovirt-engine machine ?
Am I make something wrong? Or create VM from template is really
slow ?
my servers : srv-0202 = ovirt-engine , vdsm srv-0203 = spm , vdsm srv-0204 = vdsm These servers are dell blades connected on a 100GB switch.
thanks
This is what I know about SPM: http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011
= Storage Pool Manager (SPM) A role assigned to one host in a data
granting it sole authority over:
* Creation, deletion, an dmanipulation of virtula disk images, snapshots and templates * Templates: you can create on VM as a golden image and
center provision to
multiple VMs (QCOW layers) * Allocation of storage for sparse block devices (on SAN) * Thin provisinoing (see below) * Single metadata writer: * SPM lease mechanism (Chockler and Malkhi 2004,
Light-Weight Leases
for Storage-Cnntric Coordination) * Storage-centric mailbox * This role can be migrated to any host in data center
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: "Moti Asayag" <masayag@redhat.com> Cc: users@ovirt.org Sent: Friday, April 25, 2014 9:50:11 PM Subject: Re: [ovirt-users] does SPM can run over ovirt-engine host ?
Below are the results of the commands before and during VM creation.
I executed the commands on srv-0202 (engine) and srv-0203 (vdsm + spm). 1) First, srv-0202 with the commands vdsClient -s localhost getVdsStats AND vdsClient -s localhost getVdsCaps BEFORE creating the VM. 1.1) The same on srv-0203.
2) Second, srv-0202 with the same commands DURING VM creation, when the admin portal in the browser shows the network consumption exceeding 98%. 2.1) The same on srv-0203.
About the VM in this test: it is created with THIN provisioning from a template, and runs on srv-0203.
"Once you've identified the interface, you can see if the 'ovirtmgmt' is reported with that high consumption " <<<== I dont see high consumption on rx/txRate
During the VM creation both srv-0202 and srv-0203 report high consumption:
From srv-0202 stats: 'em1': {'name': 'em1', 'rxRate': '0.5', 'txRate': '98.4'}, 'ovirtmgmt': {'name': 'ovirtmgmt', 'rxRate': '0.4', 'txRate': '94.1'},
srv-0203 shows a similar result in its stats below. On both hosts the 'ovirtmgmt' (the management network) reported the high utilization. 'ovirtmgmt' is configured as a vm network (linux bridge), but none of the vm vnics connected to it (reported as 'vnet*') show high throughput; only the 'em1' nic does. To summarize at this stage, you have 3 hosts in the cluster: srv-0202 - 98% (serves 3 vms), srv-0203 - 98% (SPM, serves 4 vms), srv-0204 - 10% (serves 1 vm). So it seems that the issue is between srv-0202 and srv-0203 and not with the ovirt-engine server (assuming no hosted-engine is involved here). Going over the thread again, I noticed I skipped a piece of log which requires further attention. I'll ask Allon from the storage team to have a look at it.
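If you want to see which connection is actually carrying that traffic while the copy runs, something like this on srv-0203 should show the peer and the owning process (a rough sketch; the 172.16.6.x addresses are the ones reported by getVdsCaps in this thread):

# rough sketch: established TCP connections towards the 172.16.6.x hosts,
# with the owning process -- run it while the VM creation is in progress
netstat -tnp | grep '172\.16\.6\.'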
"or if it is configured on top of the highly used nic." my ports are listed on srv-0202 as 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], and on srv-0203 as 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], I dont know when vnet is created and/or modified
vnets are created when a vm nic is connected to a vm network: when the vm is being started, or when a vnic hot plug is executed, the vnic port will be created on that bridge (ovirtmgmt in the case above).
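You can also inspect the bridge ports directly on the host at any time, e.g. (assuming bridge-utils is installed, as it normally is on these el6 hosts):

# list the nic and vnet ports currently attached to the ovirtmgmt bridge
brctl show ovirtmgmt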
======================================================== 1) srv-0202 vdsClient -s localhost getVdsStats and vdsClient -s localhost getVdsCaps
[root@srv-0202 ~]# vdsClient -s localhost getVdsStats anonHugePages = '2394' cpuIdle = '97.23' cpuLoad = '0.16' cpuSys = '0.97' cpuSysVdsmd = '0.50' cpuUser = '1.80' cpuUserVdsmd = '1.00' dateTime = '2014-04-25T16:15:18 GMT' diskStats = {'/tmp': {'free': '1102470'}, '/var/log': {'free': '1102470'}, '/var/log/core': {'free': '1102470'}, '/var/run/vdsm/': {'free': '1102470'}} elapsedTime = '1346980' generationID = 'a1c01b50-eb16-4c73-8528-297b5116e141' haStats = {'active': False, 'configured': False, 'globalMaintenance': False, 'localMaintenance': False, 'score': 0} ksmCpu = 5 ksmPages = 64 ksmState = True memAvailable = 10341 memCommitted = 18627 memFree = 26565 memShared = 296595 memUsed = '18' momStatus = 'active' netConfigDirty = 'False' network = {';vdsmdummy;': {'name': ';vdsmdummy;', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond0': {'name': 'bond0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond1': {'name': 'bond1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond2': {'name': 'bond2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond3': {'name': 'bond3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond4': {'name': 'bond4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em1': {'name': 'em1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', * 'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.1'}, 'em2': {'name': 'em2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'down',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em3': {'name': 'em3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em4': {'name': 'em4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'lo': {'name': 'lo', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.2', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.2'}, 'ovirtmgmt': {'name': 'ovirtmgmt', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.1'}, 'vnet0': {'name': 'vnet0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet1': {'name': 'vnet1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet2': {'name': 'vnet2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}} rxDropped = '0' rxRate = '0.20' statsAge = '0.20' storageDomains = {'3410b593-dbd0-4ab8-9a21-3e3c51fe8e90': {'acquired': True, 'code': 0, 'delay': '0.000173751',
'lastCheck': '7.5', 'valid': True,
'version': 3}, '6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d': {'acquired': False, 'code': 358, 'delay': '0',
'lastCheck': '7.5', 'valid': False,
'version': -1}} swapFree = 7390 swapTotal = 8095 thpState = 'always' txDropped = '0' txRate = '0.39' vmActive = 3 vmCount = 3 vmMigrating = 0
[root@srv-0202 ~]# vdsClient -s localhost getVdsCaps HBAInventory = {'FC': [{'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2000001b329f5fce', 'wwpn': '2100001b329f5fce'}, {'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2001001b32bf5fce', 'wwpn': '2101001b32bf5fce'}], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:b956608e509'}]} ISCSIInitiatorName = 'iqn.1994-05.com.redhat:b956608e509' bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond1': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond2': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond3': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond4': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}} bridges = {';vdsmdummy;': {'addr': '', 'cfg': {}, 'gateway': '', 'ipv6addrs': [], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': [], 'stp': 'off'}, 'ovirtmgmt': {'addr': '172.16.6.192', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'DNS1': '172.16.4.2', 'DNS2': '164.41.222.130', 'DNS3': '164.41.222.207', 'DOMAIN': 'lttd.br', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.192', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], 'stp': 'off'}} clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4'] cpuCores = '8' cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270' cpuModel = 'Intel(R) Xeon(R) CPU X5560 @ 2.80GHz' cpuSockets = '2' cpuSpeed = '2793.024' cpuThreads = '16' emulatedMachines = ['rhel6.5.0', 'pc', 'rhel6.4.0', 'rhel6.3.0', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = '65' hooks = {} kvmEnabled = 'true' lastClient = '127.0.0.1' lastClientIface = 'lo' management_ip = '0.0.0.0' memSize = '32094' netConfigDirty = 'False' networks = {'ovirtmgmt': {'addr': '172.16.6.192', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'DNS1': '172.16.4.2', 'DNS2': '164.41.222.130', 'DNS3': '164.41.222.207', 'DOMAIN': 'lttd.br', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.192', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'iface': 'ovirtmgmt', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], 'stp': 'off'}} nics = {'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'em1', 'HWADDR': '00:22:19:69:85:0f', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': 
'00:22:19:69:85:0f', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em2': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em2', 'HWADDR': '00:22:19:69:85:11', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'System em2', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '16f75c25-48cc-4dec-97ff-0e7f0822b26f'}, 'hwaddr': '00:22:19:69:85:11', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em3': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEVICE': 'em3', 'HWADDR': '00:22:19:69:85:13', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'TYPE': 'Ethernet', 'UUID': '1d8a2db3-d8c2-480b-ab2a-2decea844280'}, 'hwaddr': '00:22:19:69:85:13', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em4': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEVICE': 'em4', 'HWADDR': '00:22:19:69:85:15', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'TYPE': 'Ethernet', 'UUID': 'e139faa0-bd6c-4aec-b524-134d2bd30fb6'}, 'hwaddr': '00:22:19:69:85:15', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}} operatingSystem = {'name': 'RHEL', 'release': '5.el6.centos.11.2', 'version': '6'} packages2 = {'kernel': {'buildtime': 1395788395.0, 'release': '431.11.2.el6.x86_64', 'version': '2.6.32'}, 'libvirt': {'buildtime': 1396856799, 'release': '29.el6_5.7', 'version': '0.10.2'}, 'mom': {'buildtime': 1391183641, 'release': '1.el6', 'version': '0.4.0'}, 'qemu-img': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'qemu-kvm': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'spice-server': {'buildtime': 1386756528, 'release': '6.el6_5.1', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1395806448, 'release': '0.el6', 'version': '4.14.6'}} reservedMem = '321' rngSources = ['random'] software_revision = '0' software_version = '4.14' supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4'] supportedProtocols = ['2.2', '2.3'] uuid = '4C4C4544-0050-4410-804C-B8C04F374D31' version_name = 'Snow Man' vlans = {} vmTypes = ['kvm'] [root@srv-0202 ~]#
================================================================ 1.1) srv-0203 vdsClient -s localhost getVdsStats and vdsClient -s localhost getVdsCaps
[root@srv-0203 ~]# vdsClient -s localhost getVdsStats anonHugePages = '6608' cpuIdle = '98.26' cpuLoad = '0.00' cpuSys = '0.84' cpuSysVdsmd = '0.62' cpuUser = '0.90' cpuUserVdsmd = '1.24' dateTime = '2014-04-25T16:16:41 GMT' diskStats = {'/tmp': {'free': '2189974'}, '/var/log': {'free': '2189974'}, '/var/log/core': {'free': '2189974'}, '/var/run/vdsm/': {'free': '2189974'}} elapsedTime = '1346974' generationID = '5a96bbca-5947-41b9-b61b-5a7d8bd603fe' haStats = {'active': False, 'configured': False, 'globalMaintenance': False, 'localMaintenance': False, 'score': 0} ksmCpu = 4 ksmPages = 64 ksmState = True memAvailable = 1202 memCommitted = 24836 memFree = 21835 memShared = 515063 memUsed = '32' momStatus = 'active' netConfigDirty = 'False' network = {';vdsmdummy;': {'name': ';vdsmdummy;', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond0': {'name': 'bond0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond1': {'name': 'bond1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond2': {'name': 'bond2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond3': {'name': 'bond3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond4': {'name': 'bond4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em1': {'name': 'em1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.1', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em2': {'name': 'em2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em3': {'name': 'em3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em4': {'name': 'em4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'lo': {'name': 'lo', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.2', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.2'}, 'ovirtmgmt': {'name': 'ovirtmgmt', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.1', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet0': {'name': 'vnet0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet1': {'name': 'vnet1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet2': {'name': 'vnet2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet3': {'name': 'vnet3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}} rxDropped = '0' rxRate = '0.20' statsAge = '1.45' storageDomains = {'3410b593-dbd0-4ab8-9a21-3e3c51fe8e90': {'acquired': True, 'code': 0, 'delay': 
'0.000300909',
'lastCheck': '5.5', 'valid': True,
'version': 3}, '6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d': {'acquired': True, 'code': 0, 'delay': '0.000324327',
'lastCheck': '6.3', 'valid': True,
'version': 0}} swapFree = 6544 swapTotal = 8023 thpState = 'always' txDropped = '0' txRate = '0.11' vmActive = 4 vmCount = 4 vmMigrating = 0
[root@srv-0203 ~]# vdsClient -s localhost getVdsCaps HBAInventory = {'FC': [{'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2000001b329f95ce', 'wwpn': '2100001b329f95ce'}, {'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2001001b32bf95ce', 'wwpn': '2101001b32bf95ce'}], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:a94b25bd22a'}]} ISCSIInitiatorName = 'iqn.1994-05.com.redhat:a94b25bd22a' bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond1': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond2': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond3': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond4': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}} bridges = {';vdsmdummy;': {'addr': '', 'cfg': {}, 'gateway': '', 'ipv6addrs': [], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': [], 'stp': 'off'}, 'ovirtmgmt': {'addr': '172.16.6.193', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.193', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], 'stp': 'off'}} clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4'] cpuCores = '8' cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270' cpuModel = 'Intel(R) Xeon(R) CPU X5560 @ 2.80GHz' cpuSockets = '2' cpuSpeed = '2793.159' cpuThreads = '16' emulatedMachines = ['rhel6.5.0', 'pc', 'rhel6.4.0', 'rhel6.3.0', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = '65' hooks = {} kvmEnabled = 'true' lastClient = '127.0.0.1' lastClientIface = 'lo' management_ip = '0.0.0.0' memSize = '32094' netConfigDirty = 'False' networks = {'ovirtmgmt': {'addr': '172.16.6.193', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.193', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'iface': 'ovirtmgmt', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], 'stp': 'off'}} nics = {'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'em1', 'HWADDR': '00:22:19:69:b6:c1', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': '00:22:19:69:b6:c1', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em2': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 
'DEVICE': 'em2', 'HWADDR': '00:22:19:69:B6:C3', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth1', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'yes', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '61c021d1-c174-4b06-be72-1a4e7d6fa80e'}, 'hwaddr': '00:22:19:69:b6:c3', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c3/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em3': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em3', 'HWADDR': '00:22:19:69:B6:C5', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth2', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': 'fecc09f3-51b4-4ba5-a306-abf3a12f3971'}, 'hwaddr': '00:22:19:69:b6:c5', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em4': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em4', 'HWADDR': '00:22:19:69:B6:C7', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth3', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '6e850fef-33b4-41e2-97d7-1fcd9bb334ed'}, 'hwaddr': '00:22:19:69:b6:c7', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}} operatingSystem = {'name': 'RHEL', 'release': '5.el6.centos.11.2', 'version': '6'} packages2 = {'kernel': {'buildtime': 1395788395.0, 'release': '431.11.2.el6.x86_64', 'version': '2.6.32'}, 'libvirt': {'buildtime': 1396856799, 'release': '29.el6_5.7', 'version': '0.10.2'}, 'mom': {'buildtime': 1391183641, 'release': '1.el6', 'version': '0.4.0'}, 'qemu-img': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'qemu-kvm': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'spice-server': {'buildtime': 1386756528, 'release': '6.el6_5.1', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1395806448, 'release': '0.el6', 'version': '4.14.6'}} reservedMem = '321' rngSources = ['random'] software_revision = '0' software_version = '4.14' supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4'] supportedProtocols = ['2.2', '2.3'] uuid = '4C4C4544-0050-4410-804C-B6C04F374D31' version_name = 'Snow Man' vlans = {} vmTypes = ['kvm']
====================================== 2. srv-0202 vdsClient -s localhost getVdsStats and vdsClient -s localhost getVdsCaps
[root@srv-0202 ~]# vdsClient -s localhost getVdsStats anonHugePages = '2318' cpuIdle = '98.70' cpuLoad = '0.24' cpuSys = '0.94' cpuSysVdsmd = '0.62' cpuUser = '0.37' cpuUserVdsmd = '0.75' dateTime = '2014-04-25T17:37:35 GMT' diskStats = {'/tmp': {'free': '1097545'}, '/var/log': {'free': '1097545'}, '/var/log/core': {'free': '1097545'}, '/var/run/vdsm/': {'free': '1097545'}} elapsedTime = '1351918' generationID = 'a1c01b50-eb16-4c73-8528-297b5116e141' haStats = {'active': False, 'configured': False, 'globalMaintenance': False, 'localMaintenance': False, 'score': 0} ksmCpu = 3 ksmPages = 64 ksmState = True memAvailable = 10221 memCommitted = 18627 memFree = 26835 memShared = 274282 memUsed = '17' momStatus = 'active' netConfigDirty = 'False' network = {';vdsmdummy;': {'name': ';vdsmdummy;', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond0': {'name': 'bond0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond1': {'name': 'bond1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond2': {'name': 'bond2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond3': {'name': 'bond3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond4': {'name': 'bond4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em1': {'name': 'em1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.5', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '98.4'}, 'em2': {'name': 'em2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em3': {'name': 'em3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em4': {'name': 'em4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'lo': {'name': 'lo', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'ovirtmgmt': {'name': 'ovirtmgmt', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.4', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '94.1'}, 'vnet0': {'name': 'vnet0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet1': {'name': 'vnet1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet2': {'name': 'vnet2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}} rxDropped = '0' rxRate = '0.85' statsAge = '0.72' storageDomains = {'3410b593-dbd0-4ab8-9a21-3e3c51fe8e90': {'acquired': True, 'code': 0, 'delay': '0.000274753',
'lastCheck': '7.4', 'valid': True,
'version': 3}, '6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d': {'acquired': False, 'code': 358, 'delay': '0',
'lastCheck': '2.3', 'valid': False,
'version': -1}} swapFree = 7090 swapTotal = 8095 thpState = 'always' txDropped = '0' txRate = '100.00' vmActive = 3 vmCount = 3 vmMigrating = 0
[root@srv-0202 ~]# vdsClient -s localhost getVdsCaps HBAInventory = {'FC': [{'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2000001b329f5fce', 'wwpn': '2100001b329f5fce'}, {'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2001001b32bf5fce', 'wwpn': '2101001b32bf5fce'}], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:b956608e509'}]} ISCSIInitiatorName = 'iqn.1994-05.com.redhat:b956608e509' bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond1': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond2': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond3': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond4': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}} bridges = {';vdsmdummy;': {'addr': '', 'cfg': {}, 'gateway': '', 'ipv6addrs': [], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': [], 'stp': 'off'}, 'ovirtmgmt': {'addr': '172.16.6.192', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'DNS1': '172.16.4.2', 'DNS2': '164.41.222.130', 'DNS3': '164.41.222.207', 'DOMAIN': 'lttd.br', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.192', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], 'stp': 'off'}} clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4'] cpuCores = '8' cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270' cpuModel = 'Intel(R) Xeon(R) CPU X5560 @ 2.80GHz' cpuSockets = '2' cpuSpeed = '2793.024' cpuThreads = '16' emulatedMachines = ['rhel6.5.0', 'pc', 'rhel6.4.0', 'rhel6.3.0', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = '65' hooks = {} kvmEnabled = 'true' lastClient = '127.0.0.1' lastClientIface = 'lo' management_ip = '0.0.0.0' memSize = '32094' netConfigDirty = 'False' networks = {'ovirtmgmt': {'addr': '172.16.6.192', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'DNS1': '172.16.4.2', 'DNS2': '164.41.222.130', 'DNS3': '164.41.222.207', 'DOMAIN': 'lttd.br', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.192', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'iface': 'ovirtmgmt', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['em1', 'vnet0', 'vnet1', 'vnet2'], 'stp': 'off'}} nics = {'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'em1', 'HWADDR': '00:22:19:69:85:0f', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': 
'00:22:19:69:85:0f', 'ipv6addrs': ['fe80::222:19ff:fe69:850f/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em2': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em2', 'HWADDR': '00:22:19:69:85:11', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'System em2', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '16f75c25-48cc-4dec-97ff-0e7f0822b26f'}, 'hwaddr': '00:22:19:69:85:11', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em3': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEVICE': 'em3', 'HWADDR': '00:22:19:69:85:13', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'TYPE': 'Ethernet', 'UUID': '1d8a2db3-d8c2-480b-ab2a-2decea844280'}, 'hwaddr': '00:22:19:69:85:13', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em4': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEVICE': 'em4', 'HWADDR': '00:22:19:69:85:15', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'TYPE': 'Ethernet', 'UUID': 'e139faa0-bd6c-4aec-b524-134d2bd30fb6'}, 'hwaddr': '00:22:19:69:85:15', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}} operatingSystem = {'name': 'RHEL', 'release': '5.el6.centos.11.2', 'version': '6'} packages2 = {'kernel': {'buildtime': 1395788395.0, 'release': '431.11.2.el6.x86_64', 'version': '2.6.32'}, 'libvirt': {'buildtime': 1396856799, 'release': '29.el6_5.7', 'version': '0.10.2'}, 'mom': {'buildtime': 1391183641, 'release': '1.el6', 'version': '0.4.0'}, 'qemu-img': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'qemu-kvm': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'spice-server': {'buildtime': 1386756528, 'release': '6.el6_5.1', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1395806448, 'release': '0.el6', 'version': '4.14.6'}} reservedMem = '321' rngSources = ['random'] software_revision = '0' software_version = '4.14' supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4'] supportedProtocols = ['2.2', '2.3'] uuid = '4C4C4544-0050-4410-804C-B8C04F374D31' version_name = 'Snow Man' vlans = {} vmTypes = ['kvm'] [root@srv-0202 ~]#
======================================================================== 2.1 srv-0203
[root@srv-0203 ~]# vdsClient -s localhost getVdsStats anonHugePages = '6646' cpuIdle = '97.74' cpuLoad = '1.33' cpuSys = '1.61' cpuSysVdsmd = '0.50' cpuUser = '0.65' cpuUserVdsmd = '1.25' dateTime = '2014-04-25T17:41:04 GMT' diskStats = {'/tmp': {'free': '2189939'}, '/var/log': {'free': '2189939'}, '/var/log/core': {'free': '2189939'}, '/var/run/vdsm/': {'free': '2189939'}} elapsedTime = '1352036' generationID = '5a96bbca-5947-41b9-b61b-5a7d8bd603fe' haStats = {'active': False, 'configured': False, 'globalMaintenance': False, 'localMaintenance': False, 'score': 0} ksmCpu = 8 ksmPages = 64 ksmState = True memAvailable = 1163 memCommitted = 24836 memFree = 22101 memShared = 508863 memUsed = '32' momStatus = 'active' netConfigDirty = 'False' network = {';vdsmdummy;': {'name': ';vdsmdummy;', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond0': {'name': 'bond0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond1': {'name': 'bond1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond2': {'name': 'bond2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond3': {'name': 'bond3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'bond4': {'name': 'bond4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em1': {'name': 'em1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '98.4', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.5'}, 'em2': {'name': 'em2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em3': {'name': 'em3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'em4': {'name': 'em4', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'down', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'lo': {'name': 'lo', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', 'state': 'up', 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'ovirtmgmt': {'name': 'ovirtmgmt', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '94.5', 'speed': '1000', * 'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.5'}, 'vnet0': {'name': 'vnet0', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', * 'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet1': {'name': 'vnet1', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet2': {'name': 'vnet2', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}, 'vnet3': {'name': 'vnet3', 'rxDropped': '0', 'rxErrors': '0', 'rxRate': '0.0', 'speed': '1000', *'state': 'up',* 'txDropped': '0', 'txErrors': '0', 'txRate': '0.0'}} rxDropped = '0' rxRate = '96.47' statsAge = '0.07' storageDomains = {'3410b593-dbd0-4ab8-9a21-3e3c51fe8e90': {'acquired': True, 'code': 0, 
'delay': '0.133906',
'lastCheck': '1.4', 'valid': True,
'version': 3}, '6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d': {'acquired': True, 'code': 0, 'delay': '0.133774',
'lastCheck': '7.2', 'valid': True,
'version': 0}} swapFree = 6263 swapTotal = 8023 thpState = 'always' txDropped = '0' txRate = '0.49' vmActive = 4 vmCount = 4 vmMigrating = 0 [root@srv-0203 ~]#
[root@srv-0203 ~]# vdsClient -s localhost getVdsCaps HBAInventory = {'FC': [{'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2000001b329f95ce', 'wwpn': '2100001b329f95ce'}, {'model': 'QLE2562 - PCI-Express Dual Channel 8Gb Fibre Channel HBA', 'wwnn': '2001001b32bf95ce', 'wwpn': '2101001b32bf95ce'}], 'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:a94b25bd22a'}]} ISCSIInitiatorName = 'iqn.1994-05.com.redhat:a94b25bd22a' bondings = {'bond0': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond1': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond2': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond3': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}, 'bond4': {'addr': '', 'cfg': {}, 'hwaddr': '00:00:00:00:00:00', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'slaves': []}} bridges = {';vdsmdummy;': {'addr': '', 'cfg': {}, 'gateway': '', 'ipv6addrs': [], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '', 'ports': [], 'stp': 'off'}, 'ovirtmgmt': {'addr': '172.16.6.193', 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.193', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], 'stp': 'off'}} clusterLevels = ['3.0', '3.1', '3.2', '3.3', '3.4'] cpuCores = '8' cpuFlags = 'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,est,tm2,ssse3,cx16,xtpr,pdcm,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_n270' cpuModel = 'Intel(R) Xeon(R) CPU X5560 @ 2.80GHz' cpuSockets = '2' cpuSpeed = '2793.159' cpuThreads = '16' emulatedMachines = ['rhel6.5.0', 'pc', 'rhel6.4.0', 'rhel6.3.0', 'rhel6.2.0', 'rhel6.1.0', 'rhel6.0.0', 'rhel5.5.0', 'rhel5.4.4', 'rhel5.4.0'] guestOverhead = '65' hooks = {} kvmEnabled = 'true' lastClient = '127.0.0.1' lastClientIface = 'lo' management_ip = '0.0.0.0' memSize = '32094' netConfigDirty = 'False' networks = {'ovirtmgmt': {'addr': '172.16.6.193', 'bridged': True, 'cfg': {'BOOTPROTO': 'none', 'DEFROUTE': 'yes', 'DELAY': '0', 'DEVICE': 'ovirtmgmt', 'GATEWAY': '172.16.6.1', 'IPADDR': '172.16.6.193', 'NETMASK': '255.255.255.0', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no', 'TYPE': 'Bridge'}, 'gateway': '172.16.6.1', 'iface': 'ovirtmgmt', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'ipv6gateway': '::', 'mtu': '1500', 'netmask': '255.255.255.0', 'ports': ['vnet0', 'em1', 'vnet1', 'vnet2', 'vnet3'], 'stp': 'off'}} nics = {'em1': {'addr': '', 'cfg': {'BRIDGE': 'ovirtmgmt', 'DEVICE': 'em1', 'HWADDR': '00:22:19:69:b6:c1', 'NM_CONTROLLED': 'no', 'ONBOOT': 'yes', 'STP': 'no'}, 'hwaddr': '00:22:19:69:b6:c1', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c1/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em2': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 
'DEVICE': 'em2', 'HWADDR': '00:22:19:69:B6:C3', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth1', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'yes', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '61c021d1-c174-4b06-be72-1a4e7d6fa80e'}, 'hwaddr': '00:22:19:69:b6:c3', 'ipv6addrs': ['fe80::222:19ff:fe69:b6c3/64'], 'mtu': '1500', 'netmask': '', 'speed': 1000}, 'em3': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em3', 'HWADDR': '00:22:19:69:B6:C5', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth2', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': 'fecc09f3-51b4-4ba5-a306-abf3a12f3971'}, 'hwaddr': '00:22:19:69:b6:c5', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}, 'em4': {'addr': '', 'cfg': {'BOOTPROTO': 'dhcp', 'DEFROUTE': 'yes', 'DEVICE': 'em4', 'HWADDR': '00:22:19:69:B6:C7', 'IPV4_FAILURE_FATAL': 'yes', 'IPV6INIT': 'no', 'NAME': 'eth3', 'NM_CONTROLLED': 'yes', 'ONBOOT': 'no', 'PEERDNS': 'yes', 'PEERROUTES': 'yes', 'TYPE': 'Ethernet', 'UUID': '6e850fef-33b4-41e2-97d7-1fcd9bb334ed'}, 'hwaddr': '00:22:19:69:b6:c7', 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'speed': 0}} operatingSystem = {'name': 'RHEL', 'release': '5.el6.centos.11.2', 'version': '6'} packages2 = {'kernel': {'buildtime': 1395788395.0, 'release': '431.11.2.el6.x86_64', 'version': '2.6.32'}, 'libvirt': {'buildtime': 1396856799, 'release': '29.el6_5.7', 'version': '0.10.2'}, 'mom': {'buildtime': 1391183641, 'release': '1.el6', 'version': '0.4.0'}, 'qemu-img': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'qemu-kvm': {'buildtime': 1398190363, 'release': '2.415.el6_5.8', 'version': '0.12.1.2'}, 'spice-server': {'buildtime': 1386756528, 'release': '6.el6_5.1', 'version': '0.12.4'}, 'vdsm': {'buildtime': 1395806448, 'release': '0.el6', 'version': '4.14.6'}} reservedMem = '321' rngSources = ['random'] software_revision = '0' software_version = '4.14' supportedENGINEs = ['3.0', '3.1', '3.2', '3.3', '3.4'] supportedProtocols = ['2.2', '2.3'] uuid = '4C4C4544-0050-4410-804C-B6C04F374D31' version_name = 'Snow Man' vlans = {} vmTypes = ['kvm'] [root@srv-0203 ~]#
On Thu, Apr 24, 2014 at 5:03 PM, Moti Asayag <masayag@redhat.com> wrote:
----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: "Moti Asayag" <masayag@redhat.com> Cc: users@ovirt.org Sent: Thursday, April 24, 2014 8:04:51 PM Subject: Re: [ovirt-users] does SPM can run over ovirt-engine host ?
I created a link to an image showing the network consumption between the engine and the SPM.
http://pt-br.tinypic.com/r/dzi80i/8 http://tinypic.com/view.php?pic=dzi80i&s=8#.U1lEKfldVyN
The image shows a generic message regarding the host network consumption. In 3.4 there will be a specific log message stating the device name [1].
You can check which specific NIC it is by searching for rxRate or txRate in the output of the following command, which should be executed on the SPM:
vdsClient -s localhost getVdsStats
Once you've identified the interface, you can see whether 'ovirtmgmt' itself is reported with that high consumption or whether it is configured on top of the heavily used NIC. Otherwise, there is another issue, not related to engine-SPM connectivity.
You can paste the output of 'vdsClient -s localhost getVdsStats' and 'vdsClient -s localhost getVdsCaps' so that both the utilization and the network configuration can be examined.
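For example, something along these lines (only a rough sketch: the grep just pulls the per-interface name/rxRate/txRate fields out of the getVdsStats output, and brctl, normally present on EL6 hosts, lists which physical NICs sit under each bridge, including ovirtmgmt) should make the busy interface obvious:

vdsClient -s localhost getVdsStats | grep -oE "'name': '[^']*'|'rxRate': '[^']*'|'txRate': '[^']*'"
brctl show

The rates are reported as a percentage of the reported link speed, which is why the engine starts warning once a host crosses the 95% threshold.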
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1070667
srv-0202, spmStatus Free, storage pool Default 2014-04-24 13:11:36,258 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] starting spm on vds srv-0202, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:36,259 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, SpmStartVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 778a334c 2014-04-24 13:11:36,310 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling started: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 2014-04-24 13:11:37,315 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Failed in HSMGetTaskStatusVDS method 2014-04-24 13:11:37,316 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 task status = finished 2014-04-24 13:11:37,316 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-04-24 13:11:37,363 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended, spm status: Free 2014-04-24 13:11:37,364 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, HSMClearTaskVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, taskId=198c7765-38cb-42e7-9349-93ca43be7066), log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, HSMClearTaskVDSCommand, log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@dfe925d, log id: 778a334c 2014-04-24 13:11:37,411 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool 2014-04-24 13:11:37,416 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: 443b1ed8, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 
2014-04-24 13:11:37,418 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-04-24 13:11:37,466 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Irs placed on server fbdf0655-6560-4e12-a95a-875592f62cb5 failed. Proceed Failover 2014-04-24 13:11:37,528 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] hostFromVds::selectedVds
Does this list have a preferred image-hosting site or blog? Thanks.
Below is the tail of spm-lock.log:
[root@srv-0203 vdsm]# tail -f spm-lock.log [2014-03-06 18:21:21] Protecting spm lock for vdsm pid 2992 [2014-03-06 18:21:21] Trying to acquire lease - sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d lease_file=/rhev/data-center/mnt/srv-0202.lttd.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases id=1 lease_time_ms=5000 io_op_to_ms=1000 [2014-03-06 18:21:34] Lease acquired sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.ltd.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases, TS=1394140892739675 [2014-03-06 18:21:34] *Protecting spm lock for vdsm *pid 2992 [2014-03-06 18:21:34] Started renewal process (pid=17519) for sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.unb.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases [2014-03-06 18:21:34] Stopping lease for pool: 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d pgrps: -17519 User defined signal 1 [2014-03-06 18:21:34] releasing lease sdUUID=6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d id=1 lease_path=/rhev/data-center/mnt/srv-0202.lttd.br: _var_lib_exports_iso/6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d/dom_md/leases
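(If the lease renewals above ever look slow, one quick sanity check is to time a direct read of the domain metadata file on the SPM, which is roughly what produces the 'delay' values seen in getVdsStats. The path below is only a placeholder; substitute the actual mount point and domain UUID from the lease_path lines above:

time dd if=/rhev/data-center/mnt/<server>:<export>/<domain-uuid>/dom_md/metadata of=/dev/null bs=4096 count=1 iflag=direct

A read time in the hundreds of milliseconds would point at slow storage/NFS rather than at the engine.)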
On Thu, Apr 24, 2014 at 1:51 PM, Tamer Lima <tamer.americo@gmail.com> wrote:
Hi, this is the relevant piece of engine.log from srv-0202 (the engine server); the SPM was assigned to srv-0203.
log from serv-0202 (engine server): 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].*
Below is the log from before the VM creation procedure. The log starts at the moment I click to create a new virtual machine:
(The VM creation procedure takes more than 1 hour. I ran tcpdump on srv-0203 (the SPM); even when creating with thin provisioning, I collected 500 GB of traffic between srv-0202 and srv-0203. When the VM is finally created there is no real disk allocation from oVirt, only my tcpdump capture file. I do not know why this traffic exists.)
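(For what it's worth, a much narrower capture, sketched below, would keep the file small and show which ports the traffic is actually on, e.g. whether it is NFS to the storage domain exported from srv-0202. This assumes the traffic rides the management NIC em1 and that the host names resolve on srv-0203; use the IP addresses otherwise:

tcpdump -i em1 -nn -s 96 -c 20000 -w /tmp/spm-traffic.pcap host srv-0202 and host srv-0203
tcpdump -nn -r /tmp/spm-traffic.pcap | awk '{print $5}' | sort | uniq -c | sort -rn | head

The first command keeps only 20,000 truncated packets; the second summarizes the most common destination address:port pairs in the capture.)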
Log from srv-0202 (engine server):
2014-04-24 13:11:36,241 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [1a138258] Correlation ID: 1a138258, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 2014-04-24 13:11:36,255 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] hostFromVds::selectedVds
srv-0203, spmStatus Free, storage pool Default 2014-04-24 13:11:37,530 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] starting spm on vds srv-0203, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:37,531 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, SpmStartVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 77e0918 2014-04-24 13:11:37,589 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling started: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 2014-04-24 13:11:38,595 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 task status = finished 2014-04-24 13:11:38,652 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended, spm status: SPM 2014-04-24 13:11:38,653 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, HSMClearTaskVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, taskId=81164899-b8b5-4ea5-9c82-94b66a3df741), log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, HSMClearTaskVDSCommand, log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@67238f8a , log id: 77e0918 2014-04-24 13:11:38,699 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Initialize Irs proxy from vds: srv-0203.lttd.br 2014-04-24 13:11:38,703 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host srv-0203 (Address: srv-0203.lttd.br). 2014-04-24 13:11:38,703 WARN [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-48) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue. 2014-04-24 13:11:38,711 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false), log id: 710a52c9 2014-04-24 13:11:38,735 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] -- executeIrsBrokerCommand: Attempting on storage pool 5849b030-626e-47cb-ad90-3ce782d831b3 2014-04-24 13:11:38,736 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, HSMGetAllTasksInfoVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f), log id: 14a15273 2014-04-24 13:11:38,741 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 14a15273 2014-04-24 13:11:38,741 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
(org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 710a52c9 2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] Discovered no tasks on Storage Pool Default 2014-04-24 13:14:52,094 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:14:52,097 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational 2014-04-24 13:14:54,923 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-37) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202 2014-04-24 13:17:59,281 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Lock Acquired to object EngineLock [exclusiveLocks= key: *servidor-teste* value: VM_NAME , sharedLocks= key: 1f08d35a-adf0-4734-9ce6-1431406096ba value: TEMPLATE key: c8e52f2a-5384-41ee-af77-7ee37bf54355 value: DISK ] 2014-04-24 13:17:59,302 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] Running command: AddVmFromTemplateCommand internal: false. Entities affected : ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups, ID: 1f08d35a-adf0-4734-9ce6-1431406096ba Type: VmTemplate, ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage, ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups 2014-04-24 13:17:59,336 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] START, SetVmStatusVDSCommand( vmId = 8a94d957-621e-4cd6-b94d-64a0572cb759, status = ImageLocked), log id: 6ada3a4a 2014-04-24 13:17:59,339 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [*7fb59186*] FINISH, SetVmStatusVDSCommand, log id: 6ada3a4a 2014-04-24 13:17:59,344 INFO [org.ovirt.engine.core.bll.CreateCloneOfTemplateCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] Running command: CreateCloneOfTemplateCommand internal: true. 
Entities affected : ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage 2014-04-24 13:17:59,371 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] START, CopyImageVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, imageGroupId = c8e52f2a-5384-41ee-af77-7ee37bf54355, imageId = 5c642d47-4f03-4a81-8a10-067b98e068f4, dstImageGroupId = 5a09cae5-c7a1-466d-9b69-ff8ad739d71c, vmId = 1f08d35a-adf0-4734-9ce6-1431406096ba, dstImageId = 2d82ce92-96f1-482c-b8fe-c21d9dfb23e6, imageDescription = , dstStorageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, copyVolumeType = LeafVol, volumeFormat = RAW, preallocate = Sparse, postZero = false, force = false), log id: 4a480fe7 2014-04-24 13:17:59,372 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- executeIrsBrokerCommand: calling 'copyImage' with two new parameters: description and UUID 2014-04-24 13:17:59,373 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- copyImage parameters: sdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 vmGUID=1f08d35a-adf0-4734-9ce6-1431406096ba srcImageGUID=c8e52f2a-5384-41ee-af77-7ee37bf54355 srcVolUUID=5c642d47-4f03-4a81-8a10-067b98e068f4 dstImageGUID=5a09cae5-c7a1-466d-9b69-ff8ad739d71c dstVolUUID=2d82ce92-96f1-482c-b8fe-c21d9dfb23e6 descr= dstSdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 2014-04-24 13:17:59,442 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] FINISH, CopyImageVDSCommand, return: 00000000-0000-0000-0000-000000000000, log id: 4a480fe7 2014-04-24 13:17:59,446 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 755c7619-60e6-4899-b772-17c56cdec057 2014-04-24 13:17:59,447 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandMultiAsyncTasks::AttachTask: Attaching task e8726bad-05ff-4f89-a127-146a3f8bceb2 to command 755c7619-60e6-4899-b772-17c56cdec057. 2014-04-24 13:17:59,451 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-9) [48e79aaf] Adding task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet.. 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) [48e79aaf] Correlation ID: *7fb59186*, Job ID: aeb08ac5-d157-40ae-bcd5-ec68d9cc5ae8, Call Stack: null, Custom Event ID: -1, Message: VM* servidor-teste creation was initiated by admin.* 2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] BaseAsyncTask::startPollingTask: Starting to poll task e8726bad-05ff-4f89-a127-146a3f8bceb2. 
2014-04-24 13:17:59,560 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now 2014-04-24 13:17:59,566 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-99) SPMAsyncTask::PollTask: Polling task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status running. 2014-04-24 13:17:59,567 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Finished polling Tasks, will poll again in 10 seconds. 2014-04-24 13:17:59,653 INFO [org.ovirt.engine.core.bll.network.vm.ReorderVmNicsCommand] (ajp--127.0.0.1-8702-5) [601e9dcb] Running command: ReorderVmNicsCommand internal: false. Entities affected : ID: 8a94d957-621e-4cd6-b94d-64a0572cb759 Type: VM 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].* 2014-04-24 13:19:54,926 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN 2014-04-24 13:19:54,929 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_*DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational* 2014-04-24 13:19:57,802 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-36) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202 ^C
On Thu, Apr 24, 2014 at 3:27 AM, Moti Asayag <masayag@redhat.com> wrote:
----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: users@ovirt.org Sent: Monday, April 14, 2014 5:13:12 PM Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
Hello,
When I create virtual machine from a template (centos6.5, 2 cores, 8GB mem, 500GB hd) this process takes almost 2 hours. I click on "New VM" button and just select the template and click ok.
engine.log show me high network consumption (98%) between engine-server host and SPM host.
Could you share the piece of log which indicates the 98% consumption is between the engine server and the SPM host (vs. the SPM node and the storage server)?
I tried to make my engine-server host an SPM host too, but without success.
Can the SPM run on the same ovirt-engine machine?
Am I doing something wrong? Or is creating a VM from a template really slow?
my servers : srv-0202 = ovirt-engine , vdsm srv-0203 = spm , vdsm srv-0204 = vdsm These servers are dell blades connected on a 100GB switch.
thanks
This is what I know about SPM: http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011
= Storage Pool Manager (SPM) A role assigned to one host in a data center granting it sole authority over:
* Creation, deletion, and manipulation of virtual disk images, snapshots and templates
* Templates: you can create one VM as a golden image and provision to multiple VMs (QCOW layers)
* Allocation of storage for sparse block devices (on SAN)
* Thin provisioning (see below)
* Single metadata writer:
* SPM lease mechanism (Chockler and Malkhi 2004, Light-Weight Leases for Storage-Centric Coordination)
* Storage-centric mailbox
* This role can be migrated to any host in the data center

----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: "Moti Asayag" <masayag@redhat.com> Cc: users@ovirt.org Sent: Thursday, April 24, 2014 7:51:39 PM Subject: Re: [ovirt-users] does SPM can run over ovirt-engine host ?
Hi, this is the relevant piece of engine.log from srv-0202 (the engine server); the SPM was assigned to srv-0203.
log from serv-0202 (engine server): 2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, *Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].* 2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message:* Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].*
Below is the log from before the VM creation procedure. The log starts at the moment I click to create a new virtual machine:
(The VM creation procedure takes more than 1 hour. I ran tcpdump on srv-0203 (the SPM); even when creating with thin provisioning, I collected 500 GB of traffic between srv-0202 and srv-0203. When the VM is finally created there is no real disk allocation from oVirt, only my tcpdump capture file. I do not know why this traffic exists.)
Allon, could you advise ?
Log from srv-0202 (engine server):
2014-04-24 13:11:36,241 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [1a138258] Correlation ID: 1a138258, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 2014-04-24 13:11:36,255 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] hostFromVds::selectedVds - srv-0202, spmStatus Free, storage pool Default 2014-04-24 13:11:36,258 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] starting spm on vds srv-0202, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:36,259 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, SpmStartVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 778a334c 2014-04-24 13:11:36,310 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling started: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 2014-04-24 13:11:37,315 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Failed in HSMGetTaskStatusVDS method 2014-04-24 13:11:37,316 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended: taskId = 198c7765-38cb-42e7-9349-93ca43be7066 task status = finished 2014-04-24 13:11:37,316 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] Start SPM Task failed - result: cleanSuccess, message: VDSGenericException: VDSErrorException: Failed to HSMGetTaskStatusVDS, error = Storage domain does not exist, code = 358 2014-04-24 13:11:37,363 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] spmStart polling ended, spm status: Free 2014-04-24 13:11:37,364 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] START, HSMClearTaskVDSCommand(HostName = srv-0202, HostId = fbdf0655-6560-4e12-a95a-875592f62cb5, taskId=198c7765-38cb-42e7-9349-93ca43be7066), log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, HSMClearTaskVDSCommand, log id: 6e6ad022 2014-04-24 13:11:37,409 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [1a138258] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@dfe925d, log id: 778a334c 2014-04-24 13:11:37,411 INFO [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Running command: SetStoragePoolStatusCommand internal: true. Entities affected : ID: 5849b030-626e-47cb-ad90-3ce782d831b3 Type: StoragePool 2014-04-24 13:11:37,416 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: 443b1ed8, Call Stack: null, Custom Event ID: -1, Message: Invalid status on Data Center Default. Setting status to Non Responsive. 
2014-04-24 13:11:37,418 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] IrsBroker::Failed::GetStoragePoolInfoVDS due to: IrsSpmStartFailedException: IRSGenericException: IRSErrorException: SpmStart failed 2014-04-24 13:11:37,466 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Irs placed on server fbdf0655-6560-4e12-a95a-875592f62cb5 failed. Proceed Failover 2014-04-24 13:11:37,528 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] hostFromVds::selectedVds - srv-0203, spmStatus Free, storage pool Default 2014-04-24 13:11:37,530 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] starting spm on vds srv-0203, storage pool Default, prevId -1, LVER -1 2014-04-24 13:11:37,531 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, SpmStartVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, prevId=-1, prevLVER=-1, storagePoolFormatType=V3, recoveryMode=Manual, SCSIFencing=false), log id: 77e0918 2014-04-24 13:11:37,589 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling started: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 2014-04-24 13:11:38,595 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended: taskId = 81164899-b8b5-4ea5-9c82-94b66a3df741 task status = finished 2014-04-24 13:11:38,652 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] spmStart polling ended, spm status: SPM 2014-04-24 13:11:38,653 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] START, HSMClearTaskVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f, taskId=81164899-b8b5-4ea5-9c82-94b66a3df741), log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, HSMClearTaskVDSCommand, log id: 71e2abc 2014-04-24 13:11:38,698 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult@67238f8a, log id: 77e0918 2014-04-24 13:11:38,699 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Initialize Irs proxy from vds: srv-0203.lttd.br 2014-04-24 13:11:38,703 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-20) [443b1ed8] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host srv-0203 (Address: srv-0203.lttd.br). 2014-04-24 13:11:38,703 WARN [org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil] (org.ovirt.thread.pool-6-thread-48) Executing a command: java.util.concurrent.FutureTask , but note that there are 1 tasks in the queue. 
2014-04-24 13:11:38,711 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, SPMGetAllTasksInfoVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false), log id: 710a52c9
2014-04-24 13:11:38,735 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] -- executeIrsBrokerCommand: Attempting on storage pool 5849b030-626e-47cb-ad90-3ce782d831b3
2014-04-24 13:11:38,736 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] START, HSMGetAllTasksInfoVDSCommand(HostName = srv-0203, HostId = 6e86beba-ee71-4bae-88d5-b95b74095c2f), log id: 14a15273
2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 14a15273
2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 710a52c9
2014-04-24 13:11:38,741 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (org.ovirt.thread.pool-6-thread-48) [443b1ed8] Discovered no tasks on Storage Pool Default
2014-04-24 13:14:52,094 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN
2014-04-24 13:14:52,097 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-11) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational
2014-04-24 13:14:54,923 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-37) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202
2014-04-24 13:17:59,281 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [7fb59186] Lock Acquired to object EngineLock [exclusiveLocks= key: servidor-teste value: VM_NAME , sharedLocks= key: 1f08d35a-adf0-4734-9ce6-1431406096ba value: TEMPLATE key: c8e52f2a-5384-41ee-af77-7ee37bf54355 value: DISK ]
2014-04-24 13:17:59,302 INFO [org.ovirt.engine.core.bll.AddVmFromTemplateCommand] (ajp--127.0.0.1-8702-9) [7fb59186] Running command: AddVmFromTemplateCommand internal: false. Entities affected : ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups, ID: 1f08d35a-adf0-4734-9ce6-1431406096ba Type: VmTemplate, ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage, ID: 99408929-82cf-4dc7-a532-9d998063fa95 Type: VdsGroups
2014-04-24 13:17:59,336 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [7fb59186] START, SetVmStatusVDSCommand( vmId = 8a94d957-621e-4cd6-b94d-64a0572cb759, status = ImageLocked), log id: 6ada3a4a
2014-04-24 13:17:59,339 INFO [org.ovirt.engine.core.vdsbroker.SetVmStatusVDSCommand] (ajp--127.0.0.1-8702-9) [7fb59186] FINISH, SetVmStatusVDSCommand, log id: 6ada3a4a
2014-04-24 13:17:59,344 INFO [org.ovirt.engine.core.bll.CreateCloneOfTemplateCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] Running command: CreateCloneOfTemplateCommand internal: true. Entities affected : ID: 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 Type: Storage
2014-04-24 13:17:59,371 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] START, CopyImageVDSCommand( storagePoolId = 5849b030-626e-47cb-ad90-3ce782d831b3, ignoreFailoverLimit = false, storageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, imageGroupId = c8e52f2a-5384-41ee-af77-7ee37bf54355, imageId = 5c642d47-4f03-4a81-8a10-067b98e068f4, dstImageGroupId = 5a09cae5-c7a1-466d-9b69-ff8ad739d71c, vmId = 1f08d35a-adf0-4734-9ce6-1431406096ba, dstImageId = 2d82ce92-96f1-482c-b8fe-c21d9dfb23e6, imageDescription = , dstStorageDomainId = 3410b593-dbd0-4ab8-9a21-3e3c51fe8e90, copyVolumeType = LeafVol, volumeFormat = RAW, preallocate = Sparse, postZero = false, force = false), log id: 4a480fe7
2014-04-24 13:17:59,372 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- executeIrsBrokerCommand: calling 'copyImage' with two new parameters: description and UUID
2014-04-24 13:17:59,373 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] -- copyImage parameters: sdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90 spUUID=5849b030-626e-47cb-ad90-3ce782d831b3 vmGUID=1f08d35a-adf0-4734-9ce6-1431406096ba srcImageGUID=c8e52f2a-5384-41ee-af77-7ee37bf54355 srcVolUUID=5c642d47-4f03-4a81-8a10-067b98e068f4 dstImageGUID=5a09cae5-c7a1-466d-9b69-ff8ad739d71c dstVolUUID=2d82ce92-96f1-482c-b8fe-c21d9dfb23e6 descr= dstSdUUID=3410b593-dbd0-4ab8-9a21-3e3c51fe8e90
2014-04-24 13:17:59,442 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.CopyImageVDSCommand] (ajp--127.0.0.1-8702-9) [48e79aaf] FINISH, CopyImageVDSCommand, return: 00000000-0000-0000-0000-000000000000, log id: 4a480fe7
2014-04-24 13:17:59,446 INFO [org.ovirt.engine.core.bll.CommandAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandAsyncTask::Adding CommandMultiAsyncTasks object for command 755c7619-60e6-4899-b772-17c56cdec057
2014-04-24 13:17:59,447 INFO [org.ovirt.engine.core.bll.CommandMultiAsyncTasks] (ajp--127.0.0.1-8702-9) [48e79aaf] CommandMultiAsyncTasks::AttachTask: Attaching task e8726bad-05ff-4f89-a127-146a3f8bceb2 to command 755c7619-60e6-4899-b772-17c56cdec057.
2014-04-24 13:17:59,451 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (ajp--127.0.0.1-8702-9) [48e79aaf] Adding task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters), polling hasn't started yet..
2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-9) [48e79aaf] Correlation ID: 7fb59186, Job ID: aeb08ac5-d157-40ae-bcd5-ec68d9cc5ae8, Call Stack: null, Custom Event ID: -1, Message: VM servidor-teste creation was initiated by admin.
2014-04-24 13:17:59,497 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (ajp--127.0.0.1-8702-9) [48e79aaf] BaseAsyncTask::startPollingTask: Starting to poll task e8726bad-05ff-4f89-a127-146a3f8bceb2.
2014-04-24 13:17:59,560 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Polling and updating Async Tasks: 1 tasks, 1 tasks to poll now
2014-04-24 13:17:59,566 INFO [org.ovirt.engine.core.bll.SPMAsyncTask] (DefaultQuartzScheduler_Worker-99) SPMAsyncTask::PollTask: Polling task e8726bad-05ff-4f89-a127-146a3f8bceb2 (Parent Command AddVmFromTemplate, Parameters Type org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) returned status running.
2014-04-24 13:17:59,567 INFO [org.ovirt.engine.core.bll.AsyncTaskManager] (DefaultQuartzScheduler_Worker-99) Finished polling Tasks, will poll again in 10 seconds.
2014-04-24 13:17:59,653 INFO [org.ovirt.engine.core.bll.network.vm.ReorderVmNicsCommand] (ajp--127.0.0.1-8702-5) [601e9dcb] Running command: ReorderVmNicsCommand internal: false. Entities affected : ID: 8a94d957-621e-4cd6-b94d-64a0572cb759 Type: VM
2014-04-24 13:18:11,746 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-82) [1bb7dfd0] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Used Network resources of host srv-0202 [96%] exceeded defined threshold [95%].
2014-04-24 13:18:22,578 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-60) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Used Network resources of host srv-0203 [98%] exceeded defined threshold [95%].
2014-04-24 13:19:54,926 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) starting processDomainRecovery for domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN
2014-04-24 13:19:54,929 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-28) Storage domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN is not visible to one or more hosts. Since the domains type is ISO, hosts status will not be changed to non-operational
2014-04-24 13:19:57,802 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand] (org.ovirt.thread.pool-6-thread-36) domain 6c6178c6-f7cf-4f2c-b8f9-73cf8f18bb4d:ISO_DOMAIN in problem. vds: srv-0202
^C
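For reference, here is a minimal sketch of how the relevant numbers could be pulled out of that log: it scans engine.log, reports the first and last time the copy task e8726bad-05ff-4f89-a127-146a3f8bceb2 appears, and counts the "exceeded defined threshold" network warnings. The log path is assumed to be the default /var/log/ovirt-engine/engine.log; adjust it if your engine logs elsewhere.

#!/usr/bin/env python
# Sketch only: summarize how long the AddVmFromTemplate copy task was active
# and how many network-threshold warnings fired while it ran.
# Assumes the default engine log location and the timestamp format shown above.
import re
from datetime import datetime

LOG = "/var/log/ovirt-engine/engine.log"        # assumption: default oVirt engine log path
TASK = "e8726bad-05ff-4f89-a127-146a3f8bceb2"   # task id taken from the excerpt above

ts_re = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})")

def parse_ts(line):
    m = ts_re.match(line)
    return datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S,%f") if m else None

first = last = None
net_warnings = []
with open(LOG) as f:
    for line in f:
        ts = parse_ts(line)
        if ts is None:
            continue
        if TASK in line:
            first = first or ts
            last = ts
        if "exceeded defined threshold" in line:
            net_warnings.append(line.strip())

if first and last:
    print("task %s seen from %s to %s (duration %s)" % (TASK, first, last, last - first))
print("%d network threshold warnings, e.g.:" % len(net_warnings))
for w in net_warnings[:3]:
    print("  " + w)

Run against the excerpt above it would only show the first seconds of the task; run against the full log it should make the roughly two-hour copy window and the warning frequency explicit.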
"Os homens não são prisioneiros do destino, mas de suas próprias mentes" Franklin Roosevelt ______________________________ Tamer Américo (61) 8411-3491 Mestre em Engenharia Elétrica Cientista da Computação
On Thu, Apr 24, 2014 at 3:27 AM, Moti Asayag <masayag@redhat.com> wrote:
----- Original Message -----
From: "Tamer Lima" <tamer.americo@gmail.com> To: users@ovirt.org Sent: Monday, April 14, 2014 5:13:12 PM Subject: [ovirt-users] does SPM can run over ovirt-engine host ?
Hello,
When I create a virtual machine from a template (CentOS 6.5, 2 cores, 8GB mem, 500GB hd), this process takes almost 2 hours. I click on the "New VM" button, just select the template, and click OK.
engine.log shows me high network consumption (98%) between the engine-server host and the SPM host.
Could you share the piece of the log which indicates that the 98% consumption is between the engine server and the SPM host (as opposed to between the SPM node and the storage server)?
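One rough way to tell those two paths apart, sketched below as an illustrative example rather than an official procedure: sample /proc/net/dev on the SPM host while the copy is running and compare per-interface throughput, under the assumption that the management network and the storage network sit on different NICs (interface names are whatever the host reports, not fixed values).

#!/usr/bin/env python
# Sketch only: sample /proc/net/dev twice and print per-interface throughput,
# so traffic on the management-facing NIC can be told apart from traffic on
# the storage-facing NIC, assuming they are separate interfaces.
import time

def read_counters():
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:          # skip the two header lines
            name, data = line.split(":", 1)
            fields = data.split()
            counters[name.strip()] = (int(fields[0]), int(fields[8]))  # rx_bytes, tx_bytes
    return counters

before = read_counters()
time.sleep(5)
after = read_counters()

for nic in sorted(after):
    rx = (after[nic][0] - before.get(nic, (0, 0))[0]) / 5.0
    tx = (after[nic][1] - before.get(nic, (0, 0))[1]) / 5.0
    print("%-10s rx %8.1f MB/s  tx %8.1f MB/s" % (nic, rx / 1e6, tx / 1e6))

If both kinds of traffic share a single NIC this only shows total load; a per-connection tool such as iftop would be needed to split it by peer.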
I tried to make my engine-server host an SPM host too, but without success.
Can the SPM run on the same ovirt-engine machine?
Am I doing something wrong? Or is creating a VM from a template really this slow?
my servers:
srv-0202 = ovirt-engine, vdsm
srv-0203 = spm, vdsm
srv-0204 = vdsm
These servers are Dell blades connected to a 100GB switch.
thanks
This is what I know about SPM: http://www.ovirt.org/Storage_-_oVirt_workshop_November_2011
= Storage Pool Manager (SPM)
A role assigned to one host in a data center, granting it sole authority over:

* Creation, deletion, and manipulation of virtual disk images, snapshots and templates
* Templates: you can create one VM as a golden image and provision it to multiple VMs (QCOW layers)
* Allocation of storage for sparse block devices (on SAN)
* Thin provisioning (see below)
* Single metadata writer:
  * SPM lease mechanism (Chockler and Malkhi 2004, Light-Weight Leases for Storage-Centric Coordination)
  * Storage-centric mailbox
* This role can be migrated to any host in the data center
participants (4)
- Itamar Heim
- Moti Asayag
- Tamer Lima
- Yair Zaslavsky