[Users] SPM keeps shifting between nodes continuously

Jithin Raju rajujith@gmail.com
Tue Jan 15 12:33:50 UTC 2013


Hi Haim,

I am using POSIXFS (Gluster). I created one distributed volume with one
brick on each server. This volume is attached as the data domain.
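
For reference, the volume was created roughly like this (the volume name
and brick paths here are placeholders, not the exact ones used):

gluster volume create datavol fig:/bricks/brick1 blueberry:/bricks/brick1
gluster volume start datavol

With no replica count given, gluster defaults to a plain distribute volume
across the two bricks.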

Results after executing the given commands on host 1:
[root@fig /]# vdsClient -s 0 getConnectedStoragePoolsList
a8c8bb22-5f04-11e2-986d-525400927148

[root@fig /]# vdsClient -s 0 getStoragePoolInfo `vdsClient -s 0
getConnectedStoragePoolsList`
        name = dc
        isoprefix =
        pool_status = connected
        lver = 58
        domains = 1dc4f865-f151-4016-b8f3-3dcc0eca93a3:Active
        master_uuid = 1dc4f865-f151-4016-b8f3-3dcc0eca93a3
        version = 0
        spm_id = 1
        type = SHAREDFS
        master_ver = 1
        1dc4f865-f151-4016-b8f3-3dcc0eca93a3 = {'status': 'Active',
'diskfree': '982727000064', 'alerts': [], 'disktotal': '1035756371968'}

host 2:
[root@blueberry /]# vdsClient -s 0 getConnectedStoragePoolsList
a8c8bb22-5f04-11e2-986d-525400927148

[root@blueberry /]# vdsClient -s 0 getStoragePoolInfo `vdsClient -s 0
getConnectedStoragePoolsList`
        name = dc
        isoprefix =
        pool_status = connected
        lver = 61
        domains = 1dc4f865-f151-4016-b8f3-3dcc0eca93a3:Active
        master_uuid = 1dc4f865-f151-4016-b8f3-3dcc0eca93a3
        version = 0
        spm_id = 2
        type = SHAREDFS
        master_ver = 1
        1dc4f865-f151-4016-b8f3-3dcc0eca93a3 = {'status': 'Active',
'diskfree': '982726868992', 'alerts': [], 'disktotal': '1035756371968'}
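
Note that the lver differs between the two reads (58 on fig vs 61 on
blueberry), which fits the SPM lease being restarted repeatedly. To watch
both hosts at once, a loop along these lines should work (a sketch,
assuming root ssh access to both hosts and that the hostnames resolve):

for h in fig blueberry; do
    echo "== $h =="
    ssh root@$h 'vdsClient -s 0 getStoragePoolInfo `vdsClient -s 0 getConnectedStoragePoolsList`' | grep -E 'pool_status|lver|spm_id'
done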

Thanks,
Jithin

On Tue, Jan 15, 2013 at 5:47 PM, Haim Ateya <hateya@redhat.com> wrote:

> What type of storage are you using? Is it POSIX, by the way? Can you
> please issue the following commands on both hosts:
>
> vdsClient -s 0 getConnectedStoragePoolsList
> vdsClient -s 0 getStoragePoolInfo `vdsClient -s 0
> getConnectedStoragePoolsList`
>
>
> ----- Original Message -----
> > From: "Jithin Raju" <rajujith@gmail.com>
> > To: users@ovirt.org
> > Sent: Tuesday, January 15, 2013 12:22:50 PM
> > Subject: [Users] SPM keeps shifting between nodes continuously
> >
> >
> >
> > Hi,
> >
> >
> > I have 2 nodes of oVirt 3.1 + Gluster. When I try to activate the
> > Data Center, its status keeps cycling from Up to Contending and back
> > continuously.
> >
> >
> > In the same way, the SPM role keeps shifting between the two nodes
> > continuously.
> >
> >
> > With one node it works fine.
> >
> >
> > I remember somebody reporting this before, but I do not remember the
> > fix.
> >
> >
> > engine log:
> >
> > 2013-01-15 15:50:41,762 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
> > (QuartzScheduler_Worker-66) [16c01e11] START,
> > HSMGetAllTasksInfoVDSCommand(vdsId =
> > 7caf739e-5ef7-11e2-aa89-525400927148), log id: 59dae374
> > 2013-01-15 15:50:41,791 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
> > (QuartzScheduler_Worker-66) [16c01e11] FINISH,
> > HSMGetAllTasksInfoVDSCommand, return: [], log id: 59dae374
> > 2013-01-15 15:50:41,793 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
> > (QuartzScheduler_Worker-66) [16c01e11] FINISH,
> > SPMGetAllTasksInfoVDSCommand, return: [], log id: 77055e85
> > 2013-01-15 15:50:41,795 INFO
> > [org.ovirt.engine.core.bll.AsyncTaskManager]
> > (QuartzScheduler_Worker-66) [16c01e11]
> > AsyncTaskManager::AddStoragePoolExistingTasks: Discovered no tasks
> > on Storage Pool DC
> > 2013-01-15 15:50:41,796 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
> > (QuartzScheduler_Worker-66) [16c01e11] START,
> > SPMGetAllTasksInfoVDSCommand(storagePoolId =
> > 1a995d7c-5ef3-11e2-a8c4-525400927148, ignoreFailoverLimit = false,
> > compatabilityVersion = null), log id: 318b02c2
> > 2013-01-15 15:50:41,798 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
> > (QuartzScheduler_Worker-66) [16c01e11] --
> > SPMGetAllTasksInfoVDSCommand::ExecuteIrsBrokerCommand: Attempting on
> > storage pool 1a995d7c-5ef3-11e2-a8c4-525400927148
> > 2013-01-15 15:50:41,800 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
> > (QuartzScheduler_Worker-66) [16c01e11] START,
> > HSMGetAllTasksInfoVDSCommand(vdsId =
> > 7caf739e-5ef7-11e2-aa89-525400927148), log id: 22d29c5b
> > 2013-01-15 15:50:41,832 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand]
> > (QuartzScheduler_Worker-66) [16c01e11] FINISH,
> > HSMGetAllTasksInfoVDSCommand, return: [], log id: 22d29c5b
> > 2013-01-15 15:50:41,836 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand]
> > (QuartzScheduler_Worker-66) [16c01e11] FINISH,
> > SPMGetAllTasksInfoVDSCommand, return: [], log id: 318b02c2
> > 2013-01-15 15:50:41,841 INFO
> > [org.ovirt.engine.core.bll.AsyncTaskManager]
> > (QuartzScheduler_Worker-66) [16c01e11]
> > AsyncTaskManager::AddStoragePoolExistingTasks: Discovered no tasks
> > on Storage Pool DC
> > 2013-01-15 15:50:51,830 ERROR
> > [org.ovirt.engine.core.vdsbroker.irsbroker.GetStoragePoolInfoVDSCommand]
> > (QuartzScheduler_Worker-44)
> > irsBroker::BuildStorageDynamicFromXmlRpcStruct::Failed building
> > Storage dynamic, xmlRpcStruct =
> > org.ovirt.engine.core.vdsbroker.xmlrpc.XmlRpcStruct@7fdd2faf
> > 2013-01-15 15:50:51,832 ERROR
> > [org.ovirt.engine.core.vdsbroker.irsbroker.GetStoragePoolInfoVDSCommand]
> > (QuartzScheduler_Worker-44)
> > org.ovirt.engine.core.vdsbroker.irsbroker.IRSErrorException:
> > IRSErrorException:
> > 2013-01-15 15:50:51,833 ERROR
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (QuartzScheduler_Worker-44) IrsBroker::Failed::GetStoragePoolInfoVDS
> > due to: IRSErrorException: IRSErrorException:
> > 2013-01-15 15:50:51,865 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand]
> > (QuartzScheduler_Worker-44) START, SpmStopVDSCommand(vdsId =
> > 7caf739e-5ef7-11e2-aa89-525400927148, storagePoolId =
> > 1a995d7c-5ef3-11e2-a8c4-525400927148), log id: 6c7ade5e
> > 2013-01-15 15:50:51,899 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand]
> > (QuartzScheduler_Worker-44) SpmStopVDSCommand::Stopping SPM on vds
> > blueberry, pool id 1a995d7c-5ef3-11e2-a8c4-525400927148
> > 2013-01-15 15:50:53,032 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand]
> > (QuartzScheduler_Worker-44) FINISH, SpmStopVDSCommand, log id:
> > 6c7ade5e
> > 2013-01-15 15:50:53,036 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (QuartzScheduler_Worker-44) Irs placed on server null failed.
> > Proceed Failover
> > 2013-01-15 15:50:53,046 INFO
> > [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
> > (QuartzScheduler_Worker-44) [3f11e766] Running command:
> > SetStoragePoolStatusCommand internal: true. Entities affected : ID:
> > 1a995d7c-5ef3-11e2-a8c4-525400927148 Type: StoragePool
> > 2013-01-15 15:50:53,091 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (QuartzScheduler_Worker-44) [3f11e766] hostFromVds::selectedVds -
> > fig, spmStatus Free, storage pool DC
> > 2013-01-15 15:50:53,097 INFO
> > [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> > (QuartzScheduler_Worker-44) [3f11e766] starting spm on vds fig,
> > storage pool DC, prevId -1, LVER 27
> > 2013-01-15 15:50:53,103 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (QuartzScheduler_Worker-44) [3f11e766] START,
> > SpmStartVDSCommand(vdsId = d199e4dc-5ef4-11e2-a538-525400927148,
> > storagePoolId = 1a995d7c-5ef3-11e2-a8c4-525400927148, prevId=-1,
> > prevLVER=27, storagePoolFormatType=V1, recoveryMode=Manual,
> > SCSIFencing=false), log id: 64385c4b
> > 2013-01-15 15:50:53,144 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> > (QuartzScheduler_Worker-44) [3f11e766] spmStart polling started:
> > taskId = 373b6b46-d79b-45a5-a534-3a18d38ac65e
> >
> >
> > Thanks,
> > Jithin