Gluster command [<UNKNOWN>] failed on server...

Hi,

After a reboot, my hypervisor host cannot be activated again in the cluster and fails with the following error :-

Gluster command [<UNKNOWN>] failed on server...

Engine logs :-

2014-11-24 18:05:28,397 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90
2014-11-24 18:05:30,609 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) FINISH, GlusterVolumesListVDSCommand, return: {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0}, log id: 5f251c90
2014-11-24 18:05:33,768 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-8) [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ]
2014-11-24 18:05:33,795 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: ActivateVdsCommand internal: false. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group MANIPULATE_HOST with role type ADMIN
2014-11-24 18:05:33,796 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring lock in order to prevent monitoring for host Compute5 from data-center SV_WTC
2014-11-24 18:05:33,797 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, from now a monitoring of host will be skipped for host Compute5 from data-center SV_WTC
2014-11-24 18:05:33,817 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 1cbc7311
2014-11-24 18:05:33,820 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, SetVdsStatusVDSCommand, log id: 1cbc7311
2014-11-24 18:05:34,086 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock released. Monitoring can run now for host Compute5 from data-center SV_WTC
2014-11-24 18:05:34,088 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, Job ID: 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom Event ID: -1, Message: Host Compute5 was activated by admin.
2014-11-24 18:05:34,090 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Lock freed to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ]
2014-11-24 18:05:35,792 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-55) [3706e836] START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832
2014-11-24 18:05:37,064 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-69) START, GetHardwareInfoVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log id: 6d560cc2
2014-11-24 18:05:37,074 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-69) FINISH, GetHardwareInfoVDSCommand, log id: 6d560cc2
2014-11-24 18:05:37,093 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-69) Host Compute5 is running with disabled SELinux.
2014-11-24 18:05:37,127 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS
2014-11-24 18:05:37,147 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, GlusterServersListVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87
2014-11-24 18:05:37,164 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, GlusterServersListVDSCommand, log id: 4faed87
2014-11-24 18:05:37,189 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS
2014-11-24 18:05:37,206 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: fed5617
2014-11-24 18:05:37,209 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, SetVdsStatusVDSCommand, log id: fed5617
2014-11-24 18:05:37,223 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: 4a84c4e5, Job ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] failed on server Compute5.
2014-11-24 18:05:37,243 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host Compute5 was set to NonOperational.
2014-11-24 18:05:37,272 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS
2014-11-24 18:05:37,274 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped.
2014-11-24 18:05:38,065 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, GlusterVolumesListVDSCommand, return: {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1}, log id: 48a0c832
2014-11-24 18:05:43,243 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-35) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc
^C
[root@ccr01 ~]#

Thanks,
Punit
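For anyone retracing this on their own setup, the relevant engine-side events can be pulled out of the (much noisier) engine log with a simple grep; the path below is the usual default for an oVirt engine install and may differ on yours. The sequence to look for is the one above: GlusterServersListVDSCommand against the host, followed immediately by SetVdsStatusVDSCommand with nonOperationalReason=GLUSTER_COMMAND_FAILED.

  # show only the gluster- and host-status-related events (path assumed, adjust as needed)
  grep -E 'Gluster|NonOperational' /var/log/ovirt-engine/engine.log | tail -n 50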

Can you send the corresponding error in vdsm.log from the host?

Also check if glusterd service is running.

Thanks,
Kanagaraj
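For anyone following along, the two checks asked for here can be run on the host roughly like this; the vdsm log path and the sysvinit service name are the usual EL6 defaults and may differ on other layouts:

  # most recent gluster-related entries in the vdsm log (path assumed)
  grep -i gluster /var/log/vdsm/vdsm.log | tail -n 20
  # is the gluster management daemon running?
  service glusterd status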

Hi Kanagaraj,

Please find the attached VDSM logs :-

----------------
Thread-13::DEBUG::2014-11-24 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False
Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing
Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {}
Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False
Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {}
Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None)
Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31
Thread-13::DEBUG::2014-11-24 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release':
'1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 43.252.176.16/24'], 'interface': u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, 'Internal': {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24'], 'interface': u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '31', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': 
'1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '131', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': 
{'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24'], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': {'DEFROUTE': 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', 'GATEWAY': '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24']}, 'bond0.36': {'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': '1500', 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': '10.10.10.2'}} Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {} Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}} Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036] Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception 
occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing
Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {}
Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False
-------------------------------

[root@compute4 ~]# service glusterd status
glusterd is stopped
[root@compute4 ~]# chkconfig --list | grep glusterd
glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@compute4 ~]#

Thanks,
Punit
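A quick way to confirm that the stopped daemon is exactly what the engine is tripping over: the vdsm verb in the traceback above (glusterPeerStatus) is a thin wrapper around the gluster CLI, so running the same query by hand on the host should fail with the same message while glusterd is down:

  # the peer query that vdsm's glusterPeerStatus wraps
  gluster peer status
  # with glusterd stopped this returns:
  #   Connection failed. Please check if gluster daemon is operational.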

From vdsm.log:

"error: Connection failed. Please check if gluster daemon is operational."

Starting the glusterd service should fix this issue: 'service glusterd start'

But I am wondering why glusterd was not started automatically after the reboot.

Thanks,
Kanagaraj
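A minimal sketch of the fix described above, plus the boot-time checks; the chkconfig output earlier in the thread already shows glusterd enabled for runlevels 2-5, so if it still fails to come up after a reboot, glusterd's own log is the next thing to read (the path below is the usual EL6 default and may differ elsewhere):

  # start the daemon now
  service glusterd start
  # confirm it is enabled at boot (already 'on' for runlevels 2-5 per the output above)
  chkconfig glusterd on
  chkconfig --list glusterd
  # if it still does not start at boot, check glusterd's own log
  less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log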
Hi Kanagaraj,
Please find the attached VDSM logs :-
---------------- Thread-13::DEBUG::2014-11-24 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state preparing -> state finished Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {} Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 <http://0.0.0.0/0> table all (cwd None) Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31 Thread-13::DEBUG::2014-11-24 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 
'version': '3.5.1'}, 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, 'Internal': {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '31', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 
'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '131', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 
'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': {'DEFROUTE': 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', 'GATEWAY': '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] <http://43.25.17.16/24%27]>}, 'bond0.36': {'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': '1500', 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': '10.10.10.2'}} Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {} Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}} Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036] Thread-13::ERROR::2014-11-24 
21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing
Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {}
Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False
-------------------------------
[root@compute4 ~]# service glusterd status
glusterd is stopped
[root@compute4 ~]# chkconfig --list | grep glusterd
glusterd       0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@compute4 ~]#
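Since chkconfig reports glusterd enabled for runlevels 2-5 yet the daemon is stopped after the reboot, the useful next step is to see whether it attempted to start at boot and why it exited. A minimal sketch of that check, assuming the stock GlusterFS 3.5 log location on EL6 (the log path may differ on other builds):

   # boot-time configuration vs. current state
   chkconfig --list glusterd
   service glusterd status

   # look for error ("E") lines from the last start attempt
   tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

   # bring the host back for now
   service glusterd start && service vdsmd restart

If glusterd starts cleanly by hand, the boot-time failure is more likely an ordering or environment problem (for example, the storage network not being up yet) than a configuration one.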
Thanks, Punit
On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Can you send the corresponding error in vdsm.log from the host?
Also check if glusterd service is running.
Thanks, Kanagaraj
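For reference, a quick way to gather what is being asked for here, assuming the default log location /var/log/vdsm/vdsm.log (adjust the path if your installation differs):

   # pull the gluster-related entries and any tracebacks from vdsm.log
   grep -iE 'gluster|Traceback' /var/log/vdsm/vdsm.log | tail -n 40

   # confirm whether the gluster daemon is actually running
   service glusterd status
   pgrep -l glusterd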

Hi Kanagaraj,

Yes, once I start the gluster service and then vdsmd, the host can connect to the cluster again. But the question is why glusterd is not started automatically, even though it is enabled in chkconfig.

I have tested this in two-host cluster environments (CentOS 6.6 and CentOS 7.0); on both hypervisor clusters the host fails to reconnect to the cluster after a reboot. In both environments glusterd is enabled for the next boot, yet it fails with the same error. It seems like a bug in either Gluster or oVirt?

Please help me find a workaround if this cannot be resolved, because without it the host cannot reconnect after a reboot; the engine will consider it down, and every time I need to manually start the gluster service and vdsmd.

Thanks,
Punit

On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
From vdsm.log "error: Connection failed. Please check if gluster daemon is operational."
Starting the glusterd service should fix this issue: 'service glusterd start'. But I am wondering why glusterd was not started automatically after the reboot.
Thanks, Kanagaraj
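On the open question of why glusterd stays down after a reboot even though it is enabled: one workaround worth testing, sketched here as an assumption rather than a confirmed fix from this thread, is to make the start ordering explicit so glusterd (and the network it needs) is up before vdsmd queries it. The drop-in file name below is an arbitrary choice; glusterd.service and vdsmd.service are the stock unit names on CentOS 7:

   # CentOS 7 (systemd): have vdsmd wait for glusterd and the network
   mkdir -p /etc/systemd/system/vdsmd.service.d
   printf '[Unit]\nWants=glusterd.service\nAfter=glusterd.service network-online.target\n' \
       > /etc/systemd/system/vdsmd.service.d/99-gluster-order.conf
   systemctl daemon-reload
   systemctl enable glusterd

   # CentOS 6 (SysV init): verify glusterd is enabled and ordered before vdsmd
   chkconfig glusterd on
   ls /etc/rc3.d/ | grep -E 'glusterd|vdsmd'

If glusterd itself is failing at boot rather than simply starting too late, its own log should say why; one common cause in a setup like this, where the storage network rides on bonded VLAN interfaces, is glusterd starting before those interfaces have an address.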
On 11/24/2014 07:18 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Please find the attached VDSM logs :-
---------------- Thread-13::DEBUG::2014-11-24 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state preparing -> state finished Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {} Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None) Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31 Thread-13::DEBUG::2014-11-24 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 
'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, 'Internal': {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '31', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': 
'1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '131', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': 
{'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': {'DEFROUTE': 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', 'GATEWAY': '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': '1500', 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': '10.10.10.2'}} Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {} Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}} Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036] Thread-13::ERROR::2014-11-24 
21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured Traceback (most recent call last): File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList return {'hosts': self.svdsmProxy.glusterPeerStatus()} File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> **kwargs) File "<string>", line 2, in glusterPeerStatus File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) GlusterCmdExecFailedException: Command execution failed error: Connection failed. Please check if gluster daemon is operational. return code: 1 Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False -------------------------------
[root@compute4 ~]# service glusterd status
glusterd is stopped
[root@compute4 ~]# chkconfig --list | grep glusterd
glusterd        0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@compute4 ~]#
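A possible way to dig into why glusterd is stopped even though it is enabled: try starting it manually and, if that fails, run it in the foreground with debug logging to see where it stops (the commands below are only an illustration):

[root@compute4 ~]# service glusterd start; echo "exit code: $?"
[root@compute4 ~]# glusterd --debug    # runs glusterd in the foreground with debug-level logging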
Thanks, Punit
On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Can you send the corresponding error in vdsm.log from the host?
Also check if glusterd service is running.
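For reference, those checks would look roughly like this (SysV commands for the CentOS 6 host, systemd equivalents for CentOS 7; default log locations assumed):

# CentOS 6
service glusterd status
chkconfig --list glusterd
tail -n 100 /var/log/vdsm/vdsm.log

# CentOS 7
systemctl status glusterd
systemctl is-enabled glusterd
journalctl -u glusterd -b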
Thanks, Kanagaraj
On 11/24/2014 03:39 PM, Punit Dambiwal wrote:
Hi,
After reboot my Hypervisior host can not activate again in the cluster and failed with the following error :-
Gluster command [<UNKNOWN>] failed on server...
Engine logs :-
2014-11-24 18:05:28,397 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90 2014-11-24 18:05:30,609 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) FINISH, GlusterVolumesListVDSCommand, return: {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0}, log id: 5f251c90 2014-11-24 18:05:33,768 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-8) [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ] 2014-11-24 18:05:33,795 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: ActivateVdsCommand internal: false. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group MANIPULATE_HOST with role type ADMIN 2014-11-24 18:05:33,796 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring lock in order to prevent monitoring for host Compute5 from data-center SV_WTC 2014-11-24 18:05:33,797 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, from now a monitoring of host will be skipped for host Compute5 from data-center SV_WTC 2014-11-24 18:05:33,817 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 1cbc7311 2014-11-24 18:05:33,820 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, SetVdsStatusVDSCommand, log id: 1cbc7311 2014-11-24 18:05:34,086 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock released. Monitoring can run now for host Compute5 from data-center SV_WTC 2014-11-24 18:05:34,088 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, Job ID: 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom Event ID: -1, Message: Host Compute5 was activated by admin. 
2014-11-24 18:05:34,090 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Lock freed to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ] 2014-11-24 18:05:35,792 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-55) [3706e836] START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832 2014-11-24 18:05:37,064 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-69) START, GetHardwareInfoVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log id: 6d560cc2 2014-11-24 18:05:37,074 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-69) FINISH, GetHardwareInfoVDSCommand, log id: 6d560cc2 2014-11-24 18:05:37,093 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-69) Host Compute5 is running with disabled SELinux. 2014-11-24 18:05:37,127 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS 2014-11-24 18:05:37,147 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, GlusterServersListVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87 2014-11-24 18:05:37,164 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, GlusterServersListVDSCommand, log id: 4faed87 2014-11-24 18:05:37,189 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS 2014-11-24 18:05:37,206 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: fed5617 2014-11-24 18:05:37,209 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, SetVdsStatusVDSCommand, log id: fed5617 2014-11-24 18:05:37,223 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: 4a84c4e5, Job ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] failed on server Compute5. 2014-11-24 18:05:37,243 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host Compute5 was set to NonOperational. 2014-11-24 18:05:37,272 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: HandleVdsVersionCommand internal: true. 
Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS 2014-11-24 18:05:37,274 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped. 2014-11-24 18:05:38,065 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, GlusterVolumesListVDSCommand, return: {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1}, log id: 48a0c832 2014-11-24 18:05:43,243 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-35) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc ^C [root@ccr01 ~]#
Thanks, Punit
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Do you see any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the service is trying to start automatically after the reboot?
Thanks, Kanagaraj
On 11/24/2014 08:13 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Yes, once I start the gluster service and then vdsmd, the host can connect to the cluster. But the question is why glusterd does not start automatically even though it is enabled via chkconfig.
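The manual recovery sequence described above is roughly the following (service names as used elsewhere in this thread, on the CentOS 6 host):

service glusterd start
service vdsmd restart
gluster peer status    # should now answer instead of "Connection failed"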
I have tested this in a two-host cluster environment (CentOS 6.6 and CentOS 7.0); on both hypervisor clusters the host fails to rejoin the cluster after a reboot.
In both environments glusterd is enabled for the next boot, yet it fails with the same error. It looks like a bug in either Gluster or oVirt?
Please help me find a workaround if this cannot be resolved, because without one the host cannot connect after a reboot. That means the engine will consider it down, and every time I have to start the gluster service and vdsmd manually.
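If the root cause is simply boot-time ordering (glusterd not yet up when vdsmd starts), one possible workaround sketch, not a confirmed fix, is to order vdsmd explicitly after glusterd, via a systemd drop-in on CentOS 7 or by checking the init-script start priorities on CentOS 6:

# CentOS 7: hypothetical drop-in that orders vdsmd after glusterd
mkdir -p /etc/systemd/system/vdsmd.service.d
cat > /etc/systemd/system/vdsmd.service.d/10-gluster-order.conf <<'EOF'
[Unit]
Wants=glusterd.service
After=glusterd.service
EOF
systemctl daemon-reload

# CentOS 6: compare chkconfig start priorities (lower number starts earlier)
grep '# chkconfig:' /etc/init.d/glusterd /etc/init.d/vdsmd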
Thanks, Punit
On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
From vdsm.log "error: Connection failed. Please check if gluster daemon is operational."
Starting the glusterd service should fix this issue: 'service glusterd start'. But I am wondering why glusterd was not started automatically after the reboot.
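To tell apart "glusterd was never attempted at boot" from "it was attempted and failed", something like the following could help after the next reboot (default CentOS 6 log locations assumed):

who -b                                                  # last boot time
grep -i glusterd /var/log/messages | tail -n 50         # init-time start attempts, if any
tail -n 100 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log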
Thanks, Kanagaraj
On 11/24/2014 07:18 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Please find the attached VDSM logs :-
Thanks, Punit
On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Can you send the corresponding error in vdsm.log from the host?
Also check if glusterd service is running.
Thanks, Kanagaraj
On 11/24/2014 03:39 PM, Punit Dambiwal wrote:
Hi,
After reboot my Hypervisior host can not activate again in the cluster and failed with the following error :-
Gluster command [<UNKNOWN>] failed on server...
Engine logs :-
Thanks, Punit
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users

Hi Kanagaraj,
Please check the attached log files; I didn't find anything special.
On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the service is trying to start automatically after the reboot?
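For example, something along these lines should pull out the relevant entries after the reboot (the vdsm log is usually /var/log/vdsm/vdsm.log; the grep patterns are only a suggestion):

# glusterd's own errors from the failed automatic start
grep ' E \[' /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20
# the matching gluster failures on the vdsm side
grep -i gluster /var/log/vdsm/vdsm.log | grep -iE 'error|failed' | tail -n 40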
Thanks, Kanagaraj
On 11/24/2014 08:13 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Yes... once I start the gluster service and then vdsmd, the host can connect to the cluster. But the question is why glusterd is not started automatically, even though it is enabled with chkconfig...
I have tested this in a two-host cluster environment (CentOS 6.6 and CentOS 7.0); on both hypervisor clusters the host fails to reconnect to the cluster after a reboot...
In both environments glusterd is enabled for the next boot, but it fails with the same error... seems like a bug in either Gluster or oVirt??
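For reference, the boot-time enablement and the last startup attempt can be double-checked on both hosts; the commands differ because CentOS 6 uses SysV init while CentOS 7 uses systemd:

# CentOS 6.6 host
chkconfig --list glusterd
service glusterd status

# CentOS 7.0 host
systemctl is-enabled glusterd
systemctl status glusterd -l
journalctl -u glusterd -b    # glusterd messages from the current boot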
Please help me find a workaround if this cannot be resolved, because without it the host machine cannot connect after a reboot... that means the engine will consider it down, and every time I need to manually start the gluster service and vdsmd... ??
Thanks, Punit
On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
From vdsm.log "error: Connection failed. Please check if gluster daemon is operational."
Starting the glusterd service should fix this issue: 'service glusterd start'. But I am wondering why glusterd was not started automatically after the reboot.
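Until the root cause is found, the manual recovery after each reboot is then roughly (the vdsmd restart may not always be necessary):

service glusterd start
service vdsmd restart
# then Activate the host from the engine if it does not recover on its own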
Thanks, Kanagaraj
On 11/24/2014 07:18 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Please find the attached VDSM logs :-
---------------- Thread-13::DEBUG::2014-11-24 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state preparing -> state finished Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {} Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None) Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31 Thread-13::DEBUG::2014-11-24 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 
'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, 'Internal': {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '31', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': 
'1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '131', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': 
{'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': {'DEFROUTE': 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', 'GATEWAY': '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': '1500', 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': '10.10.10.2'}} Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {} Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}} Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036] Thread-13::ERROR::2014-11-24 
21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured Traceback (most recent call last): File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList return {'hosts': self.svdsmProxy.glusterPeerStatus()} File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> **kwargs) File "<string>", line 2, in glusterPeerStatus File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) GlusterCmdExecFailedException: Command execution failed error: Connection failed. Please check if gluster daemon is operational. return code: 1 Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False -------------------------------
[root@compute4 ~]# service glusterd status
glusterd is stopped
[root@compute4 ~]# chkconfig --list | grep glusterd
glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@compute4 ~]#
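The GlusterCmdExecFailedException in the traceback above is vdsm calling glusterPeerStatus via supervdsm; with glusterd stopped, the same failure can be reproduced directly on the host, and it should return the same message seen in vdsm.log, something like:

[root@compute4 ~]# gluster peer status
Connection failed. Please check if gluster daemon is operational.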
Thanks, Punit
On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Can you send the corresponding error in vdsm.log from the host?
Also check if glusterd service is running.
Thanks, Kanagaraj
On 11/24/2014 03:39 PM, Punit Dambiwal wrote:
Hi,
After a reboot my hypervisor host cannot be activated again in the cluster, and it fails with the following error :-
Gluster command [<UNKNOWN>] failed on server...
Engine logs :-
2014-11-24 18:05:28,397 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90 2014-11-24 18:05:30,609 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-64) FINISH, GlusterVolumesListVDSCommand, return: {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0}, log id: 5f251c90 2014-11-24 18:05:33,768 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-8) [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ] 2014-11-24 18:05:33,795 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: ActivateVdsCommand internal: false. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group MANIPULATE_HOST with role type ADMIN 2014-11-24 18:05:33,796 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring lock in order to prevent monitoring for host Compute5 from data-center SV_WTC 2014-11-24 18:05:33,797 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, from now a monitoring of host will be skipped for host Compute5 from data-center SV_WTC 2014-11-24 18:05:33,817 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 1cbc7311 2014-11-24 18:05:33,820 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, SetVdsStatusVDSCommand, log id: 1cbc7311 2014-11-24 18:05:34,086 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock released. Monitoring can run now for host Compute5 from data-center SV_WTC 2014-11-24 18:05:34,088 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, Job ID: 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom Event ID: -1, Message: Host Compute5 was activated by admin. 
2014-11-24 18:05:34,090 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Lock freed to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ] 2014-11-24 18:05:35,792 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-55) [3706e836] START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832 2014-11-24 18:05:37,064 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-69) START, GetHardwareInfoVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log id: 6d560cc2 2014-11-24 18:05:37,074 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] (DefaultQuartzScheduler_Worker-69) FINISH, GetHardwareInfoVDSCommand, log id: 6d560cc2 2014-11-24 18:05:37,093 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-69) Host Compute5 is running with disabled SELinux. 2014-11-24 18:05:37,127 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS 2014-11-24 18:05:37,147 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, GlusterServersListVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87 2014-11-24 18:05:37,164 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, GlusterServersListVDSCommand, log id: 4faed87 2014-11-24 18:05:37,189 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS 2014-11-24 18:05:37,206 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: fed5617 2014-11-24 18:05:37,209 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, SetVdsStatusVDSCommand, log id: fed5617 2014-11-24 18:05:37,223 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: 4a84c4e5, Job ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] failed on server Compute5. 2014-11-24 18:05:37,243 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host Compute5 was set to NonOperational. 2014-11-24 18:05:37,272 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: HandleVdsVersionCommand internal: true. 
Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS 2014-11-24 18:05:37,274 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped. 2014-11-24 18:05:38,065 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, GlusterVolumesListVDSCommand, return: {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1}, log id: 48a0c832 2014-11-24 18:05:43,243 INFO [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] (DefaultQuartzScheduler_Worker-35) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc ^C [root@ccr01 ~]#
Thanks, Punit
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
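One hypothesis that fits the pattern above (glusterd enabled for boot yet stopped after every reboot, while a manual 'service glusterd start' always succeeds) is that glusterd comes up before the storage network is ready, fails to resolve its peers and bricks, and exits; by the time it is started by hand the network is up, so it works. The "Initialization of volume 'management' failed" error that Kanagaraj forwards to gluster-users below would be consistent with that. A minimal sketch of a mitigation on the CentOS 7 host, assuming something provides network-online.target (e.g. NetworkManager-wait-online.service) and using an illustrative drop-in file name, is to order glusterd after network-online:

# /etc/systemd/system/glusterd.service.d/wait-for-network.conf
[Unit]
Wants=network-online.target
After=network-online.target

# then reload unit files so the drop-in takes effect
systemctl daemon-reload

On the CentOS 6.6 host a cruder stopgap would be to start glusterd (and restart vdsmd) from /etc/rc.local, which runs at the end of boot after networking is up.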

[+Gluster-users@gluster.org]

"Initialization of volume 'management' failed, review your volfile again": glusterd throws this error when the service is started automatically after the reboot, but the service is started successfully later when the user starts it manually. Can somebody from gluster-users please help with this?

glusterfs version: 3.5.1

Thanks,
Kanagaraj

----- Original Message -----
From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: users@ovirt.org Sent: Tuesday, November 25, 2014 7:24:45 PM Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...
[Attachment: gluster_after-reboot.txt (glusterd log from the automatic start after the reboot)]

------=_Part_3480043_71003673.1416938399727 Content-Type: text/plain; name=Gluster_after_manual_service_restart.txt Content-Disposition: attachment; filename=Gluster_after_manual_service_restart.txt Content-Transfer-Encoding: base64
[Attachment: Gluster_after_manual_service_restart.txt — glusterd log after manually restarting the gluster service on the same host; decoded excerpts, "[...]" marks omitted lines.]

After manual restart of the gluster service:

[2014-11-25 06:54:11.630034] I [glusterfsd.c:1959:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.5.1 (/usr/sbin/glusterd --pid-file=/var/run/glusterd.pid)
[...]  (the same rdma init warnings and "Unknown key: brick-0" through "brick-7" errors appear, but this time the bricks resolve and startup continues)
[2014-11-25 06:54:12.949846] I [glusterd.c:138:glusterd_uuid_init] 0-management: retrieved UUID: 918cef1f-ed83-4570-90ad-39c5ceec753c
Final graph: volume management, type mgmt/glusterd, option transport-type rdma, working-directory /var/lib/glusterd
[...]
[2014-11-25 06:54:25.565279] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /brick8 on port 49153
[2014-11-25 06:54:25.569930] I [glusterd-pmap.c:227:pmap_registry_bind] 0-pmap: adding brick /brick7 on port 49152
[...]  (friend add/update ACCs are exchanged with compute1, compute2, compute3 and compute5.3linux.com, the peers agree on op-version 30501, and the capture ends with ^C)
[Attachment: vdsm_aftermanual_restart.txt — vdsm.log from the host after the manual gluster restart; decoded excerpts, "[...]" marks omitted lines. The base64 part is cut off mid-stream in the archive, so the tail of the log is missing.]

Thread-13::DEBUG::2014-11-25 14:55:36,827::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
    (similar "not found" lines follow for the other gluster-swift-* packages and glusterfs-geo-replication)
Thread-13::DEBUG::2014-11-25 14:55:36,839::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31
[...]  getCapabilities and getHardwareInfo return status 'Done' (code 0) to the engine at 10.10.10.2, reporting glusterfs/glusterfs-server 3.5.1, vdsm 4.16.7 and libvirt 0.10.2, plus the ovirtmgmt/Internal/VMNetwork/storage networks on bond0/bond1; hostsList reports all five peers (10.10.10.6 and compute1/2/3/5.3linux.com) as CONNECTED.
Thread-13::DEBUG::2014-11-25 14:55:37,110::mount::227::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 43.252.176.12:/var/lib/exports/iso /rhev/data-center/mnt/43.252.176.12:_var_lib_exports_iso (cwd None)
Thread-13::DEBUG::2014-11-25 14:55:37,516::mount::227::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo -n /bin/mount -t glusterfs 10.10.10.3:/data1 /rhev/data-center/mnt/glusterSD/10.10.10.3:_data1 (cwd None)
[...]  Both connectStorageServer calls return {'status': 0}, and connectStoragePool(spUUID='f4966810-0923-4752-a054-2b4954584970', hostID=1, msdUUID='ea499cc0-684f-4a2f-abef-43ff28d649cb', masterVersion=1, ...) starts; the attachment breaks off while the storage is being refreshed (storage.sdc.refreshStorage).
b3JhZ2UuaXNjc2kucmVzY2FuKQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NToz Nyw4ODI6Om1pc2M6Ojc0Mzo6U3RvcmFnZS5TYW1wbGluZ01ldGhvZDo6KF9fY2FsbF9fKSBHb3Qg aW4gdG8gc2FtcGxpbmcgbWV0aG9kDQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1 OjM3LDg4Mjo6aXNjc2k6OjQwMzo6U3RvcmFnZS5JU0NTSTo6KHJlc2NhbikgUGVyZm9ybWluZyBT Q1NJIHNjYW4sIHRoaXMgd2lsbCB0YWtlIHVwIHRvIDMwIHNlY29uZHMNClRocmVhZC0xMzo6REVC VUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzcsODgyOjppc2NzaWFkbTo6OTI6OlN0b3JhZ2UuTWlzYy5l eGNDbWQ6OihfcnVuQ21kKSAvdXNyL2Jpbi9zdWRvIC1uIC9zYmluL2lzY3NpYWRtIC1tIHNlc3Np b24gLVIgKGN3ZCBOb25lKQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozNyw4 OTQ6Om1pc2M6Ojc1MTo6U3RvcmFnZS5TYW1wbGluZ01ldGhvZDo6KF9fY2FsbF9fKSBSZXR1cm5p bmcgbGFzdCByZXN1bHQNClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzcsODk2 OjptdWx0aXBhdGg6OjExMDo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KHJlc2NhbikgL3Vzci9iaW4v c3VkbyAtbiAvc2Jpbi9tdWx0aXBhdGggKGN3ZCBOb25lKQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAx NC0xMS0yNSAxNDo1NTozNyw5ODE6Om11bHRpcGF0aDo6MTEwOjpTdG9yYWdlLk1pc2MuZXhjQ21k OjoocmVzY2FuKSBTVUNDRVNTOiA8ZXJyPiA9ICcnOyA8cmM+ID0gMA0KVGhyZWFkLTEzOjpERUJV Rzo6MjAxNC0xMS0yNSAxNDo1NTozNyw5ODM6Omx2bTo6NDg5OjpTdG9yYWdlLk9wZXJhdGlvbk11 dGV4OjooX2ludmFsaWRhdGVBbGxQdnMpIE9wZXJhdGlvbiAnbHZtIGludmFsaWRhdGUgb3BlcmF0 aW9uJyBnb3QgdGhlIG9wZXJhdGlvbiBtdXRleA0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0y NSAxNDo1NTozNyw5ODM6Omx2bTo6NDkxOjpTdG9yYWdlLk9wZXJhdGlvbk11dGV4OjooX2ludmFs aWRhdGVBbGxQdnMpIE9wZXJhdGlvbiAnbHZtIGludmFsaWRhdGUgb3BlcmF0aW9uJyByZWxlYXNl ZCB0aGUgb3BlcmF0aW9uIG11dGV4DQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1 OjM3LDk4Mzo6bHZtOjo1MDA6OlN0b3JhZ2UuT3BlcmF0aW9uTXV0ZXg6OihfaW52YWxpZGF0ZUFs bFZncykgT3BlcmF0aW9uICdsdm0gaW52YWxpZGF0ZSBvcGVyYXRpb24nIGdvdCB0aGUgb3BlcmF0 aW9uIG11dGV4DQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM3LDk4NDo6bHZt Ojo1MDI6OlN0b3JhZ2UuT3BlcmF0aW9uTXV0ZXg6OihfaW52YWxpZGF0ZUFsbFZncykgT3BlcmF0 aW9uICdsdm0gaW52YWxpZGF0ZSBvcGVyYXRpb24nIHJlbGVhc2VkIHRoZSBvcGVyYXRpb24gbXV0 ZXgNClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzcsOTg0Ojpsdm06OjUyMDo6 U3RvcmFnZS5PcGVyYXRpb25NdXRleDo6KF9pbnZhbGlkYXRlQWxsTHZzKSBPcGVyYXRpb24gJ2x2 bSBpbnZhbGlkYXRlIG9wZXJhdGlvbicgZ290IHRoZSBvcGVyYXRpb24gbXV0ZXgNClRocmVhZC0x Mzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzcsOTg0Ojpsdm06OjUyMjo6U3RvcmFnZS5PcGVy YXRpb25NdXRleDo6KF9pbnZhbGlkYXRlQWxsTHZzKSBPcGVyYXRpb24gJ2x2bSBpbnZhbGlkYXRl IG9wZXJhdGlvbicgcmVsZWFzZWQgdGhlIG9wZXJhdGlvbiBtdXRleA0KVGhyZWFkLTEzOjpERUJV Rzo6MjAxNC0xMS0yNSAxNDo1NTozNyw5ODQ6Om1pc2M6Ojc1MTo6U3RvcmFnZS5TYW1wbGluZ01l dGhvZDo6KF9fY2FsbF9fKSBSZXR1cm5pbmcgbGFzdCByZXN1bHQNClRocmVhZC0xNDo6REVCVUc6 OjIwMTQtMTEtMjUgMTQ6NTU6MzcsOTg2OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJv Y2Vzc0xvZ3MpIFJlY2VpdmluZyByZXF1ZXN0Li4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTEx LTI1IDE0OjU1OjM3LDk4Njo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dz KSBRdWV1aW5nIHJlcXVlc3QgaW4gdGhlIHRocmVhZCBwb29sLi4uDQpUaHJlYWQtMTQ6OkRFQlVH OjoyMDE0LTExLTI1IDE0OjU1OjM3LDk4Njo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3By b2Nlc3NMb2dzKSBFeHRyYWN0aW5nIHJlcXVlc3QgaW5mb3JtYXRpb24uLi4NClRocmVhZC0xNDo6 REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzcsOTg2OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6 OihfcHJvY2Vzc0xvZ3MpICg1KSBHb3QgcmVxdWVzdCBmb3IgbWV0aG9kICdnbG9iJw0KVGhyZWFk LTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozNyw5OTc6Ol9faW5pdF9fOjoyMzI6OklPUHJv Y2Vzczo6KF9wcm9jZXNzTG9ncykgKDUpIFF1ZXVpbmcgcmVzcG9uc2UNClRocmVhZC0xMzo6REVC VUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzcsOTk5OjpmaWxlU0Q6OjE1Mjo6U3RvcmFnZS5TdG9yYWdl RG9tYWluOjooX19pbml0X18pIFJlYWRpbmcgZG9tYWluIGluIHBhdGggL3JoZXYvZGF0YS1jZW50 ZXIvbW50L2dsdXN0ZXJTRC8xMC4xMC4xMC4zOl9kYXRhMS9lYTQ5OWNjMC02ODRmLTRhMmYtYWJl 
Zi00M2ZmMjhkNjQ5Y2INClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDAw OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFJlY2VpdmluZyByZXF1 ZXN0Li4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDAwMDo6X19pbml0 X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBRdWV1aW5nIHJlcXVlc3QgaW4gdGhl IHRocmVhZCBwb29sLi4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDAw MDo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBFeHRyYWN0aW5nIHJl cXVlc3QgaW5mb3JtYXRpb24uLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6 MzgsMDAwOjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICg2KSBHb3Qg cmVxdWVzdCBmb3IgbWV0aG9kICd0b3VjaCcNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUg MTQ6NTU6MzgsMDAyOjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICg2 KSBRdWV1aW5nIHJlc3BvbnNlDQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4 LDAwMjo6cGVyc2lzdGVudERpY3Q6OjE5Mjo6U3RvcmFnZS5QZXJzaXN0ZW50RGljdDo6KF9faW5p dF9fKSBDcmVhdGVkIGEgcGVyc2lzdGVudCBkaWN0IHdpdGggRmlsZU1ldGFkYXRhUlcgYmFja2Vu ZA0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwMDM6Ol9faW5pdF9fOjoy MzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgUmVjZWl2aW5nIHJlcXVlc3QuLi4NClRocmVh ZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDAzOjpfX2luaXRfXzo6MjMyOjpJT1By b2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFF1ZXVpbmcgcmVxdWVzdCBpbiB0aGUgdGhyZWFkIHBvb2wu Li4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDA0OjpfX2luaXRfXzo6 MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIEV4dHJhY3RpbmcgcmVxdWVzdCBpbmZvcm1h dGlvbi4uLg0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwMDQ6Ol9faW5p dF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDcpIEdvdCByZXF1ZXN0IGZvciBt ZXRob2QgJ3JlYWRmaWxlJw0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCww MTg6Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDcpIFF1ZXVpbmcg cmVzcG9uc2UNClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDE5OjpwZXJz aXN0ZW50RGljdDo6MjM0OjpTdG9yYWdlLlBlcnNpc3RlbnREaWN0OjoocmVmcmVzaCkgcmVhZCBs aW5lcyAoRmlsZU1ldGFkYXRhUlcpPVsnQ0xBU1M9RGF0YScsICdERVNDUklQVElPTj1kYXRhJywg J0lPT1BUSU1FT1VUU0VDPTEwJywgJ0xFQVNFUkVUUklFUz0zJywgJ0xFQVNFVElNRVNFQz02MCcs ICdMT0NLUE9MSUNZPScsICdMT0NLUkVORVdBTElOVEVSVkFMU0VDPTUnLCAnTUFTVEVSX1ZFUlNJ T049MScsICdQT09MX0RFU0NSSVBUSU9OPVNWX1dUQycsICdQT09MX0RPTUFJTlM9OWIyZmRjZTkt YTc5NS00NGE3LTg2ZWQtOTllNTE0ZDI0Mzk5OkFjdGl2ZSxlYTQ5OWNjMC02ODRmLTRhMmYtYWJl Zi00M2ZmMjhkNjQ5Y2I6QWN0aXZlLDRiN2M3Zjk0LTJjZWUtNGZkYy05MTUzLWMyZjhjZDFmN2M2 YjpBY3RpdmUnLCAnUE9PTF9TUE1fSUQ9NCcsICdQT09MX1NQTV9MVkVSPTYnLCAnUE9PTF9VVUlE PWY0OTY2ODEwLTA5MjMtNDc1Mi1hMDU0LTJiNDk1NDU4NDk3MCcsICdSRU1PVEVfUEFUSD0xMC4x MC4xMC4zOi9kYXRhMScsICdST0xFPU1hc3RlcicsICdTRFVVSUQ9ZWE0OTljYzAtNjg0Zi00YTJm LWFiZWYtNDNmZjI4ZDY0OWNiJywgJ1RZUEU9R0xVU1RFUkZTJywgJ1ZFUlNJT049MycsICdfU0hB X0NLU1VNPTQ1NDcwODAxMzAzMzQ1ZTYxMzg5YzA3MTg3MDhlZDA3YzIwZWJlMjMnXQ0KVGhyZWFk LTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwMTk6Ol9faW5pdF9fOjoyMzI6OklPUHJv Y2Vzczo6KF9wcm9jZXNzTG9ncykgUmVjZWl2aW5nIHJlcXVlc3QuLi4NClRocmVhZC0xNDo6REVC VUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDIwOjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6Oihf cHJvY2Vzc0xvZ3MpIFF1ZXVpbmcgcmVxdWVzdCBpbiB0aGUgdGhyZWFkIHBvb2wuLi4NClRocmVh ZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDIwOjpfX2luaXRfXzo6MjMyOjpJT1By b2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIEV4dHJhY3RpbmcgcmVxdWVzdCBpbmZvcm1hdGlvbi4uLg0K VGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwMjA6Ol9faW5pdF9fOjoyMzI6 OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDgpIEdvdCByZXF1ZXN0IGZvciBtZXRob2QgJ2Fj Y2VzcycNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDIwOjpfX2luaXRf Xzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICg4KSBRdWV1aW5nIHJlc3BvbnNlDQpU 
aHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDAyMTo6X19pbml0X186OjIzMjo6 SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBSZWNlaXZpbmcgcmVxdWVzdC4uLg0KVGhyZWFkLTE0 OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwMjE6Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vz czo6KF9wcm9jZXNzTG9ncykgUXVldWluZyByZXF1ZXN0IGluIHRoZSB0aHJlYWQgcG9vbC4uLg0K VGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwMjE6Ol9faW5pdF9fOjoyMzI6 OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgRXh0cmFjdGluZyByZXF1ZXN0IGluZm9ybWF0aW9u Li4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDAyMTo6X19pbml0X186 OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSAoOSkgR290IHJlcXVlc3QgZm9yIG1ldGhv ZCAnZ2xvYicNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDMyOjpfX2lu aXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICg5KSBRdWV1aW5nIHJlc3BvbnNl DQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDAzMzo6ZmlsZVNEOjo2MTA6 OlN0b3JhZ2UuU3RvcmFnZURvbWFpbjo6KGltYWdlR2FyYmFnZUNvbGxlY3RvcikgUmVtb3Zpbmcg cmVtbmFudHMgb2YgZGVsZXRlZCBpbWFnZXMgW10NClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEt MjUgMTQ6NTU6MzgsMDMzOjpyZXNvdXJjZU1hbmFnZXI6OjQyMTo6U3RvcmFnZS5SZXNvdXJjZU1h bmFnZXI6OihyZWdpc3Rlck5hbWVzcGFjZSkgUmVnaXN0ZXJpbmcgbmFtZXNwYWNlICdlYTQ5OWNj MC02ODRmLTRhMmYtYWJlZi00M2ZmMjhkNjQ5Y2JfaW1hZ2VOUycNClRocmVhZC0xMzo6REVCVUc6 OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDMzOjpyZXNvdXJjZU1hbmFnZXI6OjQyMTo6U3RvcmFnZS5S ZXNvdXJjZU1hbmFnZXI6OihyZWdpc3Rlck5hbWVzcGFjZSkgUmVnaXN0ZXJpbmcgbmFtZXNwYWNl ICdlYTQ5OWNjMC02ODRmLTRhMmYtYWJlZi00M2ZmMjhkNjQ5Y2Jfdm9sdW1lTlMnDQpUaHJlYWQt MTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDAzNDo6c3A6OjEzOTA6OlN0b3JhZ2UuU3Rv cmFnZVBvb2w6OihzZXRNYXN0ZXJEb21haW4pIE1hc3RlciBkb21haW4gZWE0OTljYzAtNjg0Zi00 YTJmLWFiZWYtNDNmZjI4ZDY0OWNiIHZlcmlmaWVkLCB2ZXJzaW9uIDENClRocmVhZC0xMzo6REVC VUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDM0OjptaXNjOjo3NDE6OlN0b3JhZ2UuU2FtcGxpbmdN ZXRob2Q6OihfX2NhbGxfXykgVHJ5aW5nIHRvIGVudGVyIHNhbXBsaW5nIG1ldGhvZCAoc3RvcmFn ZS5zcC51cGRhdGVNb25pdG9yaW5nVGhyZWFkcykNClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEt MjUgMTQ6NTU6MzgsMDM0OjptaXNjOjo3NDM6OlN0b3JhZ2UuU2FtcGxpbmdNZXRob2Q6OihfX2Nh bGxfXykgR290IGluIHRvIHNhbXBsaW5nIG1ldGhvZA0KVGhyZWFkLTEzOjpJTkZPOjoyMDE0LTEx LTI1IDE0OjU1OjM4LDAzNDo6ZG9tYWluTW9uaXRvcjo6MTAwOjpTdG9yYWdlLkRvbWFpbk1vbml0 b3I6OihzdGFydE1vbml0b3JpbmcpIFN0YXJ0IG1vbml0b3JpbmcgOWIyZmRjZTktYTc5NS00NGE3 LTg2ZWQtOTllNTE0ZDI0Mzk5DQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4 LDAzNTo6c3A6OjE0MTM6OlN0b3JhZ2UuU3RvcmFnZVBvb2w6Oih1cGRhdGVNb25pdG9yaW5nVGhy ZWFkcykgU3RvcmFnZSBQb29sIGBmNDk2NjgxMC0wOTIzLTQ3NTItYTA1NC0yYjQ5NTQ1ODQ5NzBg IHN0YXJ0ZWQgbW9uaXRvcmluZyBkb21haW4gYDliMmZkY2U5LWE3OTUtNDRhNy04NmVkLTk5ZTUx NGQyNDM5OWANClRocmVhZC0xNzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDM1Ojpkb21h aW5Nb25pdG9yOjoxODI6OlN0b3JhZ2UuRG9tYWluTW9uaXRvclRocmVhZDo6KF9tb25pdG9yTG9v cCkgU3RhcnRpbmcgZG9tYWluIG1vbml0b3IgZm9yIDliMmZkY2U5LWE3OTUtNDRhNy04NmVkLTk5 ZTUxNGQyNDM5OQ0KVGhyZWFkLTEzOjpJTkZPOjoyMDE0LTExLTI1IDE0OjU1OjM4LDAzNjo6ZG9t YWluTW9uaXRvcjo6MTAwOjpTdG9yYWdlLkRvbWFpbk1vbml0b3I6OihzdGFydE1vbml0b3Jpbmcp IFN0YXJ0IG1vbml0b3JpbmcgZWE0OTljYzAtNjg0Zi00YTJmLWFiZWYtNDNmZjI4ZDY0OWNiDQpU aHJlYWQtMTg6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDAzODo6ZG9tYWluTW9uaXRvcjo6 MTgyOjpTdG9yYWdlLkRvbWFpbk1vbml0b3JUaHJlYWQ6OihfbW9uaXRvckxvb3ApIFN0YXJ0aW5n IGRvbWFpbiBtb25pdG9yIGZvciBlYTQ5OWNjMC02ODRmLTRhMmYtYWJlZi00M2ZmMjhkNjQ5Y2IN ClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDM4OjpzcDo6MTQxMzo6U3Rv cmFnZS5TdG9yYWdlUG9vbDo6KHVwZGF0ZU1vbml0b3JpbmdUaHJlYWRzKSBTdG9yYWdlIFBvb2wg YGY0OTY2ODEwLTA5MjMtNDc1Mi1hMDU0LTJiNDk1NDU4NDk3MGAgc3RhcnRlZCBtb25pdG9yaW5n IGRvbWFpbiBgZWE0OTljYzAtNjg0Zi00YTJmLWFiZWYtNDNmZjI4ZDY0OWNiYA0KVGhyZWFkLTEz 
OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwNDM6Om1pc2M6Ojc1MTo6U3RvcmFnZS5TYW1w bGluZ01ldGhvZDo6KF9fY2FsbF9fKSBSZXR1cm5pbmcgbGFzdCByZXN1bHQNClRocmVhZC0xMzo6 REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDQ0OjpmaWxlVXRpbHM6OjE0Mjo6U3RvcmFnZS5m aWxlVXRpbHM6OihjcmVhdGVkaXIpIENyZWF0aW5nIGRpcmVjdG9yeTogL3JoZXYvZGF0YS1jZW50 ZXIvZjQ5NjY4MTAtMDkyMy00NzUyLWEwNTQtMmI0OTU0NTg0OTcwDQpUaHJlYWQtMTQ6OkRFQlVH OjoyMDE0LTExLTI1IDE0OjU1OjM4LDA0NDo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3By b2Nlc3NMb2dzKSBSZWNlaXZpbmcgcmVxdWVzdC4uLg0KVGhyZWFkLTEzOjpXQVJOSU5HOjoyMDE0 LTExLTI1IDE0OjU1OjM4LDA0NTo6ZmlsZVV0aWxzOjoxNDk6OlN0b3JhZ2UuZmlsZVV0aWxzOjoo Y3JlYXRlZGlyKSBEaXIgL3JoZXYvZGF0YS1jZW50ZXIvZjQ5NjY4MTAtMDkyMy00NzUyLWEwNTQt MmI0OTU0NTg0OTcwIGFscmVhZHkgZXhpc3RzDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1 IDE0OjU1OjM4LDA0NTo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBR dWV1aW5nIHJlcXVlc3QgaW4gdGhlIHRocmVhZCBwb29sLi4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoy MDE0LTExLTI1IDE0OjU1OjM4LDA0Njo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nl c3NMb2dzKSBFeHRyYWN0aW5nIHJlcXVlc3QgaW5mb3JtYXRpb24uLi4NClRocmVhZC0xNDo6REVC VUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDQ3OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6Oihf cHJvY2Vzc0xvZ3MpICgxMCkgR290IHJlcXVlc3QgZm9yIG1ldGhvZCAnZ2xvYicNClRocmVhZC0x Mzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDQ4Ojpsdm06OjM2NTo6U3RvcmFnZS5PcGVy YXRpb25NdXRleDo6KF9yZWxvYWR2Z3MpIE9wZXJhdGlvbiAnbHZtIHJlbG9hZCBvcGVyYXRpb24n IGdvdCB0aGUgb3BlcmF0aW9uIG11dGV4DQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0 OjU1OjM4LDA0OTo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSAoMTAp IFF1ZXVpbmcgcmVzcG9uc2UNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6Mzgs MDUxOjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFJlY2VpdmluZyBy ZXF1ZXN0Li4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA1Mzo6X19p bml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBRdWV1aW5nIHJlcXVlc3QgaW4g dGhlIHRocmVhZCBwb29sLi4uDQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4 LDA1NDo6bHZtOjoyODg6OlN0b3JhZ2UuTWlzYy5leGNDbWQ6OihjbWQpIC91c3IvYmluL3N1ZG8g LW4gL3NiaW4vbHZtIHZncyAtLWNvbmZpZyAnIGRldmljZXMgeyBwcmVmZXJyZWRfbmFtZXMgPSBb Il4vZGV2L21hcHBlci8iXSBpZ25vcmVfc3VzcGVuZGVkX2RldmljZXM9MSB3cml0ZV9jYWNoZV9z dGF0ZT0wIGRpc2FibGVfYWZ0ZXJfZXJyb3JfY291bnQ9MyBvYnRhaW5fZGV2aWNlX2xpc3RfZnJv bV91ZGV2PTAgZmlsdGVyID0gWyAnXCcnYXwvZGV2L21hcHBlci9icmljazd8L2Rldi9tYXBwZXIv YnJpY2s4fCdcJycsICdcJydyfC4qfCdcJycgXSB9ICBnbG9iYWwgeyAgbG9ja2luZ190eXBlPTEg IHByaW9yaXRpc2Vfd3JpdGVfbG9ja3M9MSAgd2FpdF9mb3JfbG9ja3M9MSAgdXNlX2x2bWV0YWQ9 MCB9ICBiYWNrdXAgeyAgcmV0YWluX21pbiA9IDUwICByZXRhaW5fZGF5cyA9IDAgfSAnIC0tbm9o ZWFkaW5ncyAtLXVuaXRzIGIgLS1ub3N1ZmZpeCAtLXNlcGFyYXRvciAnfCcgLS1pZ25vcmVza2lw cGVkY2x1c3RlciAtbyB1dWlkLG5hbWUsYXR0cixzaXplLGZyZWUsZXh0ZW50X3NpemUsZXh0ZW50 X2NvdW50LGZyZWVfY291bnQsdGFncyx2Z19tZGFfc2l6ZSx2Z19tZGFfZnJlZSxsdl9jb3VudCxw dl9jb3VudCxwdl9uYW1lIDliMmZkY2U5LWE3OTUtNDRhNy04NmVkLTk5ZTUxNGQyNDM5OSAoY3dk IE5vbmUpDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA1Njo6X19pbml0 X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBFeHRyYWN0aW5nIHJlcXVlc3QgaW5m b3JtYXRpb24uLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDYzOjpf X2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICgxMSkgR290IHJlcXVlc3Qg Zm9yIG1ldGhvZCAnZ2xvYicNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6Mzgs MDY0OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICgxMSkgUXVldWlu ZyByZXNwb25zZQ0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwNjc6Ol9f aW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgUmVjZWl2aW5nIHJlcXVlc3Qu Li4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDY5OjpfX2luaXRfXzo6 
MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFF1ZXVpbmcgcmVxdWVzdCBpbiB0aGUgdGhy ZWFkIHBvb2wuLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDcwOjpf X2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIEV4dHJhY3RpbmcgcmVxdWVz dCBpbmZvcm1hdGlvbi4uLg0KVGhyZWFkLTE3OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCww NzE6OmZpbGVTRDo6MTUyOjpTdG9yYWdlLlN0b3JhZ2VEb21haW46OihfX2luaXRfXykgUmVhZGlu ZyBkb21haW4gaW4gcGF0aCAvcmhldi9kYXRhLWNlbnRlci9tbnQvNDMuMjUyLjE3Ni4xMjpfdmFy X2xpYl9leHBvcnRzX2lzby85YjJmZGNlOS1hNzk1LTQ0YTctODZlZC05OWU1MTRkMjQzOTkNClRo cmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDcxOjpfX2luaXRfXzo6MjMyOjpJ T1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICgxMikgR290IHJlcXVlc3QgZm9yIG1ldGhvZCAnc3Rh dHZmcycNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDczOjpfX2luaXRf Xzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICgxMikgUXVldWluZyByZXNwb25zZQ0K VGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwNzQ6Ol9faW5pdF9fOjoyMzI6 OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgUmVjZWl2aW5nIHJlcXVlc3QuLi4NClRocmVhZC0x NDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDc1OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nl c3M6OihfcHJvY2Vzc0xvZ3MpIFF1ZXVpbmcgcmVxdWVzdCBpbiB0aGUgdGhyZWFkIHBvb2wuLi4N ClRocmVhZC0xODo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDc1OjpmaWxlU0Q6OjI2MTo6 U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KGdldFJlYWREZWxheSkgL2Jpbi9kZCBpZj0vcmhldi9kYXRh LWNlbnRlci9tbnQvZ2x1c3RlclNELzEwLjEwLjEwLjM6X2RhdGExL2VhNDk5Y2MwLTY4NGYtNGEy Zi1hYmVmLTQzZmYyOGQ2NDljYi9kb21fbWQvbWV0YWRhdGEgaWZsYWc9ZGlyZWN0IG9mPS9kZXYv bnVsbCBicz00MDk2IGNvdW50PTEgKGN3ZCBOb25lKQ0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0x MS0yNSAxNDo1NTozOCwwNzY6Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9n cykgRXh0cmFjdGluZyByZXF1ZXN0IGluZm9ybWF0aW9uLi4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoy MDE0LTExLTI1IDE0OjU1OjM4LDA4Mjo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nl c3NMb2dzKSAoMTMpIEdvdCByZXF1ZXN0IGZvciBtZXRob2QgJ3RvdWNoJw0KVGhyZWFkLTE4OjpE RUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwODM6OmZpbGVTRDo6MjYxOjpTdG9yYWdlLk1pc2Mu ZXhjQ21kOjooZ2V0UmVhZERlbGF5KSBTVUNDRVNTOiA8ZXJyPiA9ICcwKzEgcmVjb3JkcyBpblxu MCsxIHJlY29yZHMgb3V0XG41NDAgYnl0ZXMgKDU0MCBCKSBjb3BpZWQsIDAuMDAwNDgwOTU3IHMs IDEuMSBNQi9zXG4nOyA8cmM+ID0gMA0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1 NTozOCwwODQ6Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDEzKSBR dWV1aW5nIHJlc3BvbnNlDQpUaHJlYWQtMTc6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA4 Nzo6cGVyc2lzdGVudERpY3Q6OjE5Mjo6U3RvcmFnZS5QZXJzaXN0ZW50RGljdDo6KF9faW5pdF9f KSBDcmVhdGVkIGEgcGVyc2lzdGVudCBkaWN0IHdpdGggRmlsZU1ldGFkYXRhUlcgYmFja2VuZA0K VGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwODg6Ol9faW5pdF9fOjoyMzI6 OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgUmVjZWl2aW5nIHJlcXVlc3QuLi4NClRocmVhZC0x NDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDg5OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nl c3M6OihfcHJvY2Vzc0xvZ3MpIFF1ZXVpbmcgcmVxdWVzdCBpbiB0aGUgdGhyZWFkIHBvb2wuLi4N ClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDg5OjpfX2luaXRfXzo6MjMy OjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIEV4dHJhY3RpbmcgcmVxdWVzdCBpbmZvcm1hdGlv bi4uLg0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwODk6Ol9faW5pdF9f OjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDE0KSBHb3QgcmVxdWVzdCBmb3IgbWV0 aG9kICdzdGF0dmZzJw0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwODk6 Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDE0KSBRdWV1aW5nIHJl c3BvbnNlDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA5MDo6X19pbml0 X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBSZWNlaXZpbmcgcmVxdWVzdC4uLg0K VGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwOTE6Ol9faW5pdF9fOjoyMzI6 OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgUXVldWluZyByZXF1ZXN0IGluIHRoZSB0aHJlYWQg 
cG9vbC4uLg0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwOTE6Ol9faW5p dF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgRXh0cmFjdGluZyByZXF1ZXN0IGlu Zm9ybWF0aW9uLi4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA5MTo6 X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSAoMTUpIEdvdCByZXF1ZXN0 IGZvciBtZXRob2QgJ3JlYWRmaWxlJw0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1 NTozOCwwOTI6Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDE1KSBR dWV1aW5nIHJlc3BvbnNlDQpUaHJlYWQtMTc6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA5 Mjo6cGVyc2lzdGVudERpY3Q6OjIzNDo6U3RvcmFnZS5QZXJzaXN0ZW50RGljdDo6KHJlZnJlc2gp IHJlYWQgbGluZXMgKEZpbGVNZXRhZGF0YVJXKT1bJ0NMQVNTPUlzbycsICdERVNDUklQVElPTj1J U09fRE9NQUlOJywgJ0lPT1BUSU1FT1VUU0VDPTEnLCAnTEVBU0VSRVRSSUVTPTMnLCAnTEVBU0VU SU1FU0VDPTUnLCAnTE9DS1BPTElDWT0nLCAnTE9DS1JFTkVXQUxJTlRFUlZBTFNFQz01JywgJ01B U1RFUl9WRVJTSU9OPTAnLCAnUE9PTF9VVUlEPTAwMDAwMDAyLTAwMDItMDAwMi0wMDAyLTAwMDAw MDAwMDNhMSw5ZGFkYzg2OC1kMjkxLTQzZmItYmU3Mi1hZDRiMWE1NTU0ZjgsZjQ5NjY4MTAtMDky My00NzUyLWEwNTQtMmI0OTU0NTg0OTcwJywgJ1JFTU9URV9QQVRIPWNsb3VkLW1nbXQuM2xpbnV4 LmNvbTovdmFyL2xpYi9leHBvcnRzL2lzbycsICdST0xFPVJlZ3VsYXInLCAnU0RVVUlEPTliMmZk Y2U5LWE3OTUtNDRhNy04NmVkLTk5ZTUxNGQyNDM5OScsICdUWVBFPU5GUycsICdWRVJTSU9OPTAn LCAnX1NIQV9DS1NVTT1lOTMxZjljZTc5NWU5ZmJlMTc5YjgwZjc0MTgzMWJlMTM3YjQ1ZmQ2J10N ClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDkzOjpfX2luaXRfXzo6MjMy OjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFJlY2VpdmluZyByZXF1ZXN0Li4uDQpUaHJlYWQt MTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA5NDo6X19pbml0X186OjIzMjo6SU9Qcm9j ZXNzOjooX3Byb2Nlc3NMb2dzKSBRdWV1aW5nIHJlcXVlc3QgaW4gdGhlIHRocmVhZCBwb29sLi4u DQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA5NTo6X19pbml0X186OjIz Mjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBFeHRyYWN0aW5nIHJlcXVlc3QgaW5mb3JtYXRp b24uLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDk1OjpfX2luaXRf Xzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICgxNikgR290IHJlcXVlc3QgZm9yIG1l dGhvZCAnYWNjZXNzJw0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwOTU6 Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDE2KSBRdWV1aW5nIHJl c3BvbnNlDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA5Njo6X19pbml0 X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBSZWNlaXZpbmcgcmVxdWVzdC4uLg0K VGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwOTc6Ol9faW5pdF9fOjoyMzI6 OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgUXVldWluZyByZXF1ZXN0IGluIHRoZSB0aHJlYWQg cG9vbC4uLg0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwOTc6Ol9faW5p dF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgRXh0cmFjdGluZyByZXF1ZXN0IGlu Zm9ybWF0aW9uLi4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDA5Nzo6 X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSAoMTcpIEdvdCByZXF1ZXN0 IGZvciBtZXRob2QgJ2FjY2VzcycNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6 MzgsMDk3OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICgxNykgUXVl dWluZyByZXNwb25zZQ0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwwOTg6 Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgUmVjZWl2aW5nIHJlcXVl c3QuLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDk4OjpfX2luaXRf Xzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFF1ZXVpbmcgcmVxdWVzdCBpbiB0aGUg dGhyZWFkIHBvb2wuLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMDk4 OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIEV4dHJhY3RpbmcgcmVx dWVzdCBpbmZvcm1hdGlvbi4uLg0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NToz OCwwOTk6Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDE4KSBHb3Qg cmVxdWVzdCBmb3IgbWV0aG9kICdhY2Nlc3MnDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1 
IDE0OjU1OjM4LDA5OTo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSAo MTgpIFF1ZXVpbmcgcmVzcG9uc2UNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6 MzgsMDk5OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFJlY2Vpdmlu ZyByZXF1ZXN0Li4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDEwMDo6 X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBRdWV1aW5nIHJlcXVlc3Qg aW4gdGhlIHRocmVhZCBwb29sLi4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1 OjM4LDEwMDo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBFeHRyYWN0 aW5nIHJlcXVlc3QgaW5mb3JtYXRpb24uLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUg MTQ6NTU6MzgsMTAwOjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICgx OSkgR290IHJlcXVlc3QgZm9yIG1ldGhvZCAnZ2xvYicNClRocmVhZC0xNDo6REVCVUc6OjIwMTQt MTEtMjUgMTQ6NTU6MzgsMTAwOjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xv Z3MpICgxOSkgUXVldWluZyByZXNwb25zZQ0KVGhyZWFkLTE3OjpERUJVRzo6MjAxNC0xMS0yNSAx NDo1NTozOCwxMDE6OmZpbGVTRDo6NjEwOjpTdG9yYWdlLlN0b3JhZ2VEb21haW46OihpbWFnZUdh cmJhZ2VDb2xsZWN0b3IpIFJlbW92aW5nIHJlbW5hbnRzIG9mIGRlbGV0ZWQgaW1hZ2VzIFtdDQpU aHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDEwMTo6X19pbml0X186OjIzMjo6 SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBSZWNlaXZpbmcgcmVxdWVzdC4uLg0KVGhyZWFkLTE0 OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwxMDM6Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vz czo6KF9wcm9jZXNzTG9ncykgUXVldWluZyByZXF1ZXN0IGluIHRoZSB0aHJlYWQgcG9vbC4uLg0K VGhyZWFkLTE3OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwxMDM6OnJlc291cmNlTWFuYWdl cjo6NDIxOjpTdG9yYWdlLlJlc291cmNlTWFuYWdlcjo6KHJlZ2lzdGVyTmFtZXNwYWNlKSBSZWdp c3RlcmluZyBuYW1lc3BhY2UgJzliMmZkY2U5LWE3OTUtNDRhNy04NmVkLTk5ZTUxNGQyNDM5OV9p bWFnZU5TJw0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwxMDQ6Ol9faW5p dF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgRXh0cmFjdGluZyByZXF1ZXN0IGlu Zm9ybWF0aW9uLi4uDQpUaHJlYWQtMTc6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDEwNTo6 cmVzb3VyY2VNYW5hZ2VyOjo0MjE6OlN0b3JhZ2UuUmVzb3VyY2VNYW5hZ2VyOjoocmVnaXN0ZXJO YW1lc3BhY2UpIFJlZ2lzdGVyaW5nIG5hbWVzcGFjZSAnOWIyZmRjZTktYTc5NS00NGE3LTg2ZWQt OTllNTE0ZDI0Mzk5X3ZvbHVtZU5TJw0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1 NTozOCwxMDU6Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDIwKSBH b3QgcmVxdWVzdCBmb3IgbWV0aG9kICdhY2Nlc3MnDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTEx LTI1IDE0OjU1OjM4LDEwNzo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dz KSAoMjApIFF1ZXVpbmcgcmVzcG9uc2UNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6 NTU6MzgsMTA4OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFJlY2Vp dmluZyByZXF1ZXN0Li4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDEw ODo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBRdWV1aW5nIHJlcXVl c3QgaW4gdGhlIHRocmVhZCBwb29sLi4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0 OjU1OjM4LDEwOTo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBFeHRy YWN0aW5nIHJlcXVlc3QgaW5mb3JtYXRpb24uLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEt MjUgMTQ6NTU6MzgsMTA5OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3Mp ICgyMSkgR290IHJlcXVlc3QgZm9yIG1ldGhvZCAnc3RhdHZmcycNClRocmVhZC0xNDo6REVCVUc6 OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMTA5OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJv Y2Vzc0xvZ3MpICgyMSkgUXVldWluZyByZXNwb25zZQ0KVGhyZWFkLTE4OjpERUJVRzo6MjAxNC0x MS0yNSAxNDo1NTozOCwxMTE6OmRvbWFpbk1vbml0b3I6OjI2NDo6U3RvcmFnZS5Eb21haW5Nb25p dG9yVGhyZWFkOjooX21vbml0b3JEb21haW4pIERvbWFpbiBlYTQ5OWNjMC02ODRmLTRhMmYtYWJl Zi00M2ZmMjhkNjQ5Y2IgY2hhbmdlZCBpdHMgc3RhdHVzIHRvIFZhbGlkDQpUaHJlYWQtMjE6OkRF QlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDExMzo6bWlzYzo6ODIxOjpTdG9yYWdlLkV2ZW50LlN0 b3JhZ2UuRG9tYWluTW9uaXRvci5vbkRvbWFpblN0YXRlQ2hhbmdlOjooX2VtaXQpIEVtaXR0aW5n 
IGV2ZW50DQpUaHJlYWQtMTg6OklORk86OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMTEzOjpjbHVzdGVy bG9jazo6MjIyOjpTdG9yYWdlLlNBTkxvY2s6OihhY3F1aXJlSG9zdElkKSBBY3F1aXJpbmcgaG9z dCBpZCBmb3IgZG9tYWluIGVhNDk5Y2MwLTY4NGYtNGEyZi1hYmVmLTQzZmYyOGQ2NDljYiAoaWQ6 IDEpDQpUaHJlYWQtMjE6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDExNTo6bWlzYzo6ODMx OjpTdG9yYWdlLkV2ZW50LlN0b3JhZ2UuRG9tYWluTW9uaXRvci5vbkRvbWFpblN0YXRlQ2hhbmdl OjooX2VtaXQpIENhbGxpbmcgcmVnaXN0ZXJlZCBtZXRob2QgYGNvbnRFSU9WbXNgDQpUaHJlYWQt MTc6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDExNTo6ZmlsZVNEOjoyNjE6OlN0b3JhZ2Uu TWlzYy5leGNDbWQ6OihnZXRSZWFkRGVsYXkpIC9iaW4vZGQgaWY9L3JoZXYvZGF0YS1jZW50ZXIv bW50LzQzLjI1Mi4xNzYuMTI6X3Zhcl9saWJfZXhwb3J0c19pc28vOWIyZmRjZTktYTc5NS00NGE3 LTg2ZWQtOTllNTE0ZDI0Mzk5L2RvbV9tZC9tZXRhZGF0YSBpZmxhZz1kaXJlY3Qgb2Y9L2Rldi9u dWxsIGJzPTQwOTYgY291bnQ9MSAoY3dkIE5vbmUpDQpUaHJlYWQtMTg6OkRFQlVHOjoyMDE0LTEx LTI1IDE0OjU1OjM4LDExNjo6Y2x1c3RlcmxvY2s6OjI0MDo6U3RvcmFnZS5TQU5Mb2NrOjooYWNx dWlyZUhvc3RJZCkgSG9zdCBpZCBmb3IgZG9tYWluIGVhNDk5Y2MwLTY4NGYtNGEyZi1hYmVmLTQz ZmYyOGQ2NDljYiBzdWNjZXNzZnVsbHkgYWNxdWlyZWQgKGlkOiAxKQ0KVGhyZWFkLTIxOjpERUJV Rzo6MjAxNC0xMS0yNSAxNDo1NTozOCwxMjI6Om1pc2M6Ojg0MTo6U3RvcmFnZS5FdmVudC5TdG9y YWdlLkRvbWFpbk1vbml0b3Iub25Eb21haW5TdGF0ZUNoYW5nZTo6KF9lbWl0KSBFdmVudCBlbWl0 dGVkDQpUaHJlYWQtMjI6OklORk86OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMTI0OjpjbGllbnRJRjo6 MTM2Ojp2ZHM6Oihjb250RUlPVm1zKSB2bUNvbnRhaW5lckxvY2sgYWNxdWlyZWQNClRocmVhZC0x Nzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMTI2OjpmaWxlU0Q6OjI2MTo6U3RvcmFnZS5N aXNjLmV4Y0NtZDo6KGdldFJlYWREZWxheSkgU1VDQ0VTUzogPGVycj4gPSAnMCsxIHJlY29yZHMg aW5cbjArMSByZWNvcmRzIG91dFxuNDM4IGJ5dGVzICg0MzggQikgY29waWVkLCAwLjAwMDI0NTQ2 NCBzLCAxLjggTUIvc1xuJzsgPHJjPiA9IDANClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUg MTQ6NTU6MzgsMTI2OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFJl Y2VpdmluZyByZXF1ZXN0Li4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4 LDEyNzo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBRdWV1aW5nIHJl cXVlc3QgaW4gdGhlIHRocmVhZCBwb29sLi4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1 IDE0OjU1OjM4LDEyNzo6X19pbml0X186OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBF eHRyYWN0aW5nIHJlcXVlc3QgaW5mb3JtYXRpb24uLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQt MTEtMjUgMTQ6NTU6MzgsMTI3OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xv Z3MpICgyMikgR290IHJlcXVlc3QgZm9yIG1ldGhvZCAnc3RhdHZmcycNClRocmVhZC0xNDo6REVC VUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMTI3OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6Oihf cHJvY2Vzc0xvZ3MpICgyMikgUXVldWluZyByZXNwb25zZQ0KVGhyZWFkLTE3OjpERUJVRzo6MjAx NC0xMS0yNSAxNDo1NTozOCwxMjg6OmRvbWFpbk1vbml0b3I6OjI2NDo6U3RvcmFnZS5Eb21haW5N b25pdG9yVGhyZWFkOjooX21vbml0b3JEb21haW4pIERvbWFpbiA5YjJmZGNlOS1hNzk1LTQ0YTct ODZlZC05OWU1MTRkMjQzOTkgY2hhbmdlZCBpdHMgc3RhdHVzIHRvIFZhbGlkDQpUaHJlYWQtMjM6 OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDEyODo6bWlzYzo6ODIxOjpTdG9yYWdlLkV2ZW50 LlN0b3JhZ2UuRG9tYWluTW9uaXRvci5vbkRvbWFpblN0YXRlQ2hhbmdlOjooX2VtaXQpIEVtaXR0 aW5nIGV2ZW50DQpUaHJlYWQtMjM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDEyOTo6bWlz Yzo6ODMxOjpTdG9yYWdlLkV2ZW50LlN0b3JhZ2UuRG9tYWluTW9uaXRvci5vbkRvbWFpblN0YXRl Q2hhbmdlOjooX2VtaXQpIENhbGxpbmcgcmVnaXN0ZXJlZCBtZXRob2QgYGNvbnRFSU9WbXNgDQpU aHJlYWQtMjM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDEyOTo6bWlzYzo6ODQxOjpTdG9y YWdlLkV2ZW50LlN0b3JhZ2UuRG9tYWluTW9uaXRvci5vbkRvbWFpblN0YXRlQ2hhbmdlOjooX2Vt aXQpIEV2ZW50IGVtaXR0ZWQNClRocmVhZC0yNDo6SU5GTzo6MjAxNC0xMS0yNSAxNDo1NTozOCwx MzA6OmNsaWVudElGOjoxMzY6OnZkczo6KGNvbnRFSU9WbXMpIHZtQ29udGFpbmVyTG9jayBhY3F1 aXJlZA0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwzMzY6Omx2bTo6Mjg4 OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooY21kKSBGQUlMRUQ6IDxlcnI+ID0gJyAgVm9sdW1lIGdy 
b3VwICI5YjJmZGNlOS1hNzk1LTQ0YTctODZlZC05OWU1MTRkMjQzOTkiIG5vdCBmb3VuZFxuICBT a2lwcGluZyB2b2x1bWUgZ3JvdXAgOWIyZmRjZTktYTc5NS00NGE3LTg2ZWQtOTllNTE0ZDI0Mzk5 XG4nOyA8cmM+ID0gNQ0KVGhyZWFkLTEzOjpXQVJOSU5HOjoyMDE0LTExLTI1IDE0OjU1OjM4LDMz Nzo6bHZtOjozNzA6OlN0b3JhZ2UuTFZNOjooX3JlbG9hZHZncykgbHZtIHZncyBmYWlsZWQ6IDUg W10gWycgIFZvbHVtZSBncm91cCAiOWIyZmRjZTktYTc5NS00NGE3LTg2ZWQtOTllNTE0ZDI0Mzk5 IiBub3QgZm91bmQnLCAnICBTa2lwcGluZyB2b2x1bWUgZ3JvdXAgOWIyZmRjZTktYTc5NS00NGE3 LTg2ZWQtOTllNTE0ZDI0Mzk5J10NClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6 MzgsMzM3Ojpsdm06OjQwNzo6U3RvcmFnZS5PcGVyYXRpb25NdXRleDo6KF9yZWxvYWR2Z3MpIE9w ZXJhdGlvbiAnbHZtIHJlbG9hZCBvcGVyYXRpb24nIHJlbGVhc2VkIHRoZSBvcGVyYXRpb24gbXV0 ZXgNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzQwOjpfX2luaXRfXzo6 MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFJlY2VpdmluZyByZXF1ZXN0Li4uDQpUaHJl YWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDM0MDo6X19pbml0X186OjIzMjo6SU9Q cm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBRdWV1aW5nIHJlcXVlc3QgaW4gdGhlIHRocmVhZCBwb29s Li4uDQpUaHJlYWQtMTQ6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDM0MDo6X19pbml0X186 OjIzMjo6SU9Qcm9jZXNzOjooX3Byb2Nlc3NMb2dzKSBFeHRyYWN0aW5nIHJlcXVlc3QgaW5mb3Jt YXRpb24uLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzQwOjpfX2lu aXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICgyMykgR290IHJlcXVlc3QgZm9y IG1ldGhvZCAnZ2xvYicNClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzQx OjpfX2luaXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpICgyMykgUXVldWluZyBy ZXNwb25zZQ0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwzNDE6Ol9faW5p dF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgUmVjZWl2aW5nIHJlcXVlc3QuLi4N ClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzQyOjpfX2luaXRfXzo6MjMy OjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIFF1ZXVpbmcgcmVxdWVzdCBpbiB0aGUgdGhyZWFk IHBvb2wuLi4NClRocmVhZC0xNDo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzQyOjpfX2lu aXRfXzo6MjMyOjpJT1Byb2Nlc3M6OihfcHJvY2Vzc0xvZ3MpIEV4dHJhY3RpbmcgcmVxdWVzdCBp bmZvcm1hdGlvbi4uLg0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwzNDI6 Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDI0KSBHb3QgcmVxdWVz dCBmb3IgbWV0aG9kICdnbG9iJw0KVGhyZWFkLTE0OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NToz OCwzNDg6Ol9faW5pdF9fOjoyMzI6OklPUHJvY2Vzczo6KF9wcm9jZXNzTG9ncykgKDI0KSBRdWV1 aW5nIHJlc3BvbnNlDQpUaHJlYWQtMTM6OklORk86OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzQ5Ojpz cDo6MTEyMDo6U3RvcmFnZS5TdG9yYWdlUG9vbDo6KF9saW5rU3RvcmFnZURvbWFpbikgTGlua2lu ZyAvcmhldi9kYXRhLWNlbnRlci9tbnQvNDMuMjUyLjE3Ni4xMjpfdmFyX2xpYl9leHBvcnRzX2lz by85YjJmZGNlOS1hNzk1LTQ0YTctODZlZC05OWU1MTRkMjQzOTkgdG8gL3JoZXYvZGF0YS1jZW50 ZXIvZjQ5NjY4MTAtMDkyMy00NzUyLWEwNTQtMmI0OTU0NTg0OTcwLzliMmZkY2U5LWE3OTUtNDRh Ny04NmVkLTk5ZTUxNGQyNDM5OQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NToz OCwzNDk6OnNwOjoxMTM3OjpTdG9yYWdlLlN0b3JhZ2VQb29sOjooX2xpbmtTdG9yYWdlRG9tYWlu KSBDcmVhdGluZyBzeW1saW5rIGZyb20gL3JoZXYvZGF0YS1jZW50ZXIvbW50LzQzLjI1Mi4xNzYu MTI6X3Zhcl9saWJfZXhwb3J0c19pc28vOWIyZmRjZTktYTc5NS00NGE3LTg2ZWQtOTllNTE0ZDI0 Mzk5IHRvIC9yaGV2L2RhdGEtY2VudGVyL2Y0OTY2ODEwLTA5MjMtNDc1Mi1hMDU0LTJiNDk1NDU4 NDk3MC85YjJmZGNlOS1hNzk1LTQ0YTctODZlZC05OWU1MTRkMjQzOTkNClRocmVhZC0xMzo6SU5G Tzo6MjAxNC0xMS0yNSAxNDo1NTozOCwzNTA6OnNwOjoxMTIwOjpTdG9yYWdlLlN0b3JhZ2VQb29s OjooX2xpbmtTdG9yYWdlRG9tYWluKSBMaW5raW5nIC9yaGV2L2RhdGEtY2VudGVyL21udC9nbHVz dGVyU0QvMTAuMTAuMTAuMzpfZGF0YTEvZWE0OTljYzAtNjg0Zi00YTJmLWFiZWYtNDNmZjI4ZDY0 OWNiIHRvIC9yaGV2L2RhdGEtY2VudGVyL2Y0OTY2ODEwLTA5MjMtNDc1Mi1hMDU0LTJiNDk1NDU4 NDk3MC9lYTQ5OWNjMC02ODRmLTRhMmYtYWJlZi00M2ZmMjhkNjQ5Y2INClRocmVhZC0xMzo6REVC VUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzUwOjpzcDo6MTEzNzo6U3RvcmFnZS5TdG9yYWdlUG9v 
bDo6KF9saW5rU3RvcmFnZURvbWFpbikgQ3JlYXRpbmcgc3ltbGluayBmcm9tIC9yaGV2L2RhdGEt Y2VudGVyL21udC9nbHVzdGVyU0QvMTAuMTAuMTAuMzpfZGF0YTEvZWE0OTljYzAtNjg0Zi00YTJm LWFiZWYtNDNmZjI4ZDY0OWNiIHRvIC9yaGV2L2RhdGEtY2VudGVyL2Y0OTY2ODEwLTA5MjMtNDc1 Mi1hMDU0LTJiNDk1NDU4NDk3MC9lYTQ5OWNjMC02ODRmLTRhMmYtYWJlZi00M2ZmMjhkNjQ5Y2IN ClRocmVhZC0xMzo6SU5GTzo6MjAxNC0xMS0yNSAxNDo1NTozOCwzNTA6OnNwOjoxMTIwOjpTdG9y YWdlLlN0b3JhZ2VQb29sOjooX2xpbmtTdG9yYWdlRG9tYWluKSBMaW5raW5nIC9yaGV2L2RhdGEt Y2VudGVyL21udC9nbHVzdGVyU0QvMTAuMTAuMTAuMzpfZGF0YTEvZWE0OTljYzAtNjg0Zi00YTJm LWFiZWYtNDNmZjI4ZDY0OWNiIHRvIC9yaGV2L2RhdGEtY2VudGVyL2Y0OTY2ODEwLTA5MjMtNDc1 Mi1hMDU0LTJiNDk1NDU4NDk3MC9tYXN0ZXJzZA0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0y NSAxNDo1NTozOCwzNTA6OnNwOjoxMTM3OjpTdG9yYWdlLlN0b3JhZ2VQb29sOjooX2xpbmtTdG9y YWdlRG9tYWluKSBDcmVhdGluZyBzeW1saW5rIGZyb20gL3JoZXYvZGF0YS1jZW50ZXIvbW50L2ds dXN0ZXJTRC8xMC4xMC4xMC4zOl9kYXRhMS9lYTQ5OWNjMC02ODRmLTRhMmYtYWJlZi00M2ZmMjhk NjQ5Y2IgdG8gL3JoZXYvZGF0YS1jZW50ZXIvZjQ5NjY4MTAtMDkyMy00NzUyLWEwNTQtMmI0OTU0 NTg0OTcwL21hc3RlcnNkDQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDM1 ODo6dGhyZWFkUG9vbDo6MzU6OlN0b3JhZ2UuVGhyZWFkUG9vbDo6KF9faW5pdF9fKSBFbnRlciAt IG51bVRocmVhZHM6IDUsIHdhaXRUaW1lb3V0OiAzLCBtYXhUYXNrczogNTAwDQpUaHJlYWQtMTM6 OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDM2MTo6c3RvcmFnZV9tYWlsYm94Ojo4NDo6U3Rv cmFnZS5NaXNjLmV4Y0NtZDo6KF9tYm94RXhlY0NtZCkgL2Jpbi9kZCBpZj0vcmhldi9kYXRhLWNl bnRlci9mNDk2NjgxMC0wOTIzLTQ3NTItYTA1NC0yYjQ5NTQ1ODQ5NzAvbWFzdGVyc2QvZG9tX21k L291dGJveCBpZmxhZz1kaXJlY3QsZnVsbGJsb2NrIGJzPTUxMiBjb3VudD04IHNraXA9OCAoY3dk IE5vbmUpDQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDM3Mzo6c3RvcmFn ZV9tYWlsYm94Ojo4NDo6U3RvcmFnZS5NaXNjLmV4Y0NtZDo6KF9tYm94RXhlY0NtZCkgU1VDQ0VT UzogPGVycj4gPSAnOCswIHJlY29yZHMgaW5cbjgrMCByZWNvcmRzIG91dFxuNDA5NiBieXRlcyAo NC4xIGtCKSBjb3BpZWQsIDAuMDA3MjI1NTYgcywgNTY3IGtCL3Ncbic7IDxyYz4gPSAwDQpUaHJl YWQtMTM6OklORk86OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzczOjpzdG9yYWdlX21haWxib3g6OjM4 NTo6U3RvcmFnZS5NYWlsQm94LkhzbU1haWxNb25pdG9yOjooX3NlbmRNYWlsKSBIU01fTWFpbE1v bml0b3Igc2VuZGluZyBtYWlsIHRvIFNQTSAtIFsnL2Jpbi9kZCcsICdvZj0vcmhldi9kYXRhLWNl bnRlci9mNDk2NjgxMC0wOTIzLTQ3NTItYTA1NC0yYjQ5NTQ1ODQ5NzAvbWFzdGVyc2QvZG9tX21k L2luYm94JywgJ2lmbGFnPWZ1bGxibG9jaycsICdvZmxhZz1kaXJlY3QnLCAnY29udj1ub3RydW5j JywgJ2JzPTUxMicsICdzZWVrPTgnXQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1 NTozOCwzNzQ6OnN0b3JhZ2VfbWFpbGJveDo6ODQ6OlN0b3JhZ2UuTWlzYy5leGNDbWQ6OihfbWJv eEV4ZWNDbWQpIC9iaW4vZGQgb2Y9L3JoZXYvZGF0YS1jZW50ZXIvZjQ5NjY4MTAtMDkyMy00NzUy LWEwNTQtMmI0OTU0NTg0OTcwL21hc3RlcnNkL2RvbV9tZC9pbmJveCBpZmxhZz1mdWxsYmxvY2sg b2ZsYWc9ZGlyZWN0IGNvbnY9bm90cnVuYyBicz01MTIgc2Vlaz04IChjd2QgTm9uZSkNClRocmVh ZC0xMzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzgxOjpzdG9yYWdlX21haWxib3g6Ojg0 OjpTdG9yYWdlLk1pc2MuZXhjQ21kOjooX21ib3hFeGVjQ21kKSBTVUNDRVNTOiA8ZXJyPiA9ICc4 KzAgcmVjb3JkcyBpblxuOCswIHJlY29yZHMgb3V0XG40MDk2IGJ5dGVzICg0LjEga0IpIGNvcGll ZCwgMC4wMDIzNjA4MiBzLCAxLjcgTUIvc1xuJzsgPHJjPiA9IDANClRocmVhZC0xMzo6REVCVUc6 OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzgyOjpzdG9yYWdlX21haWxib3g6OjIwNzo6U3RvcmFnZS5N YWlsYm94LkhTTTo6KF9faW5pdF9fKSBIU01fTWFpbGJveE1vbml0b3IgY3JlYXRlZCBmb3IgcG9v bCBmNDk2NjgxMC0wOTIzLTQ3NTItYTA1NC0yYjQ5NTQ1ODQ5NzANClRocmVhZC0xMzo6REVCVUc6 OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzgyOjpzcDo6NDYyOjpTdG9yYWdlLlN0b3JhZ2VQb29sOjoo X19jcmVhdGVNYWlsYm94TW9uaXRvcikgSFNNIG1haWxib3ggcmVhZHkgZm9yIHBvb2wgZjQ5NjY4 MTAtMDkyMy00NzUyLWEwNTQtMmI0OTU0NTg0OTcwIG9uIG1hc3RlciBkb21haW4gZWE0OTljYzAt Njg0Zi00YTJmLWFiZWYtNDNmZjI4ZDY0OWNiDQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1 IDE0OjU1OjM4LDM4Mjo6cmVzb3VyY2VNYW5hZ2VyOjo2MTY6OlN0b3JhZ2UuUmVzb3VyY2VNYW5h 
Z2VyOjoocmVsZWFzZVJlc291cmNlKSBUcnlpbmcgdG8gcmVsZWFzZSByZXNvdXJjZSAnU3RvcmFn ZS5mNDk2NjgxMC0wOTIzLTQ3NTItYTA1NC0yYjQ5NTQ1ODQ5NzAnDQpUaHJlYWQtMTM6OkRFQlVH OjoyMDE0LTExLTI1IDE0OjU1OjM4LDM4Mjo6cmVzb3VyY2VNYW5hZ2VyOjo2MzU6OlN0b3JhZ2Uu UmVzb3VyY2VNYW5hZ2VyOjoocmVsZWFzZVJlc291cmNlKSBSZWxlYXNlZCByZXNvdXJjZSAnU3Rv cmFnZS5mNDk2NjgxMC0wOTIzLTQ3NTItYTA1NC0yYjQ5NTQ1ODQ5NzAnICgwIGFjdGl2ZSB1c2Vy cykNClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzgzOjpyZXNvdXJjZU1h bmFnZXI6OjY0MTo6U3RvcmFnZS5SZXNvdXJjZU1hbmFnZXI6OihyZWxlYXNlUmVzb3VyY2UpIFJl c291cmNlICdTdG9yYWdlLmY0OTY2ODEwLTA5MjMtNDc1Mi1hMDU0LTJiNDk1NDU4NDk3MCcgaXMg ZnJlZSwgZmluZGluZyBvdXQgaWYgYW55b25lIGlzIHdhaXRpbmcgZm9yIGl0Lg0KVGhyZWFkLTEz OjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwzODM6OnJlc291cmNlTWFuYWdlcjo6NjQ5OjpT dG9yYWdlLlJlc291cmNlTWFuYWdlcjo6KHJlbGVhc2VSZXNvdXJjZSkgTm8gb25lIGlzIHdhaXRp bmcgZm9yIHJlc291cmNlICdTdG9yYWdlLmY0OTY2ODEwLTA5MjMtNDc1Mi1hMDU0LTJiNDk1NDU4 NDk3MCcsIENsZWFyaW5nIHJlY29yZHMuDQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0 OjU1OjM4LDM4Mzo6cmVzb3VyY2VNYW5hZ2VyOjo2MTY6OlN0b3JhZ2UuUmVzb3VyY2VNYW5hZ2Vy OjoocmVsZWFzZVJlc291cmNlKSBUcnlpbmcgdG8gcmVsZWFzZSByZXNvdXJjZSAnU3RvcmFnZS5I c21Eb21haW5Nb25pdG9yTG9jaycNClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6 MzgsMzgzOjpyZXNvdXJjZU1hbmFnZXI6OjYzNTo6U3RvcmFnZS5SZXNvdXJjZU1hbmFnZXI6Oihy ZWxlYXNlUmVzb3VyY2UpIFJlbGVhc2VkIHJlc291cmNlICdTdG9yYWdlLkhzbURvbWFpbk1vbml0 b3JMb2NrJyAoMCBhY3RpdmUgdXNlcnMpDQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0 OjU1OjM4LDM4Mzo6cmVzb3VyY2VNYW5hZ2VyOjo2NDE6OlN0b3JhZ2UuUmVzb3VyY2VNYW5hZ2Vy OjoocmVsZWFzZVJlc291cmNlKSBSZXNvdXJjZSAnU3RvcmFnZS5Ic21Eb21haW5Nb25pdG9yTG9j aycgaXMgZnJlZSwgZmluZGluZyBvdXQgaWYgYW55b25lIGlzIHdhaXRpbmcgZm9yIGl0Lg0KVGhy ZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwzODM6OnJlc291cmNlTWFuYWdlcjo6 NjQ5OjpTdG9yYWdlLlJlc291cmNlTWFuYWdlcjo6KHJlbGVhc2VSZXNvdXJjZSkgTm8gb25lIGlz IHdhaXRpbmcgZm9yIHJlc291cmNlICdTdG9yYWdlLkhzbURvbWFpbk1vbml0b3JMb2NrJywgQ2xl YXJpbmcgcmVjb3Jkcy4NClRocmVhZC0xMzo6SU5GTzo6MjAxNC0xMS0yNSAxNDo1NTozOCwzODQ6 OmxvZ1V0aWxzOjo0Nzo6ZGlzcGF0Y2hlcjo6KHdyYXBwZXIpIFJ1biBhbmQgcHJvdGVjdDogY29u bmVjdFN0b3JhZ2VQb29sLCBSZXR1cm4gcmVzcG9uc2U6IFRydWUNClRocmVhZC0xMzo6REVCVUc6 OjIwMTQtMTEtMjUgMTQ6NTU6MzgsMzg0Ojp0YXNrOjoxMTkxOjpTdG9yYWdlLlRhc2tNYW5hZ2Vy LlRhc2s6OihwcmVwYXJlKSBUYXNrPWAwYmJkODA4MC1hYjRjLTQ5YTItOTcwOC00OWU2ZTViZDBi NzhgOjpmaW5pc2hlZDogVHJ1ZQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NToz OCwzODQ6OnRhc2s6OjU5NTo6U3RvcmFnZS5UYXNrTWFuYWdlci5UYXNrOjooX3VwZGF0ZVN0YXRl KSBUYXNrPWAwYmJkODA4MC1hYjRjLTQ5YTItOTcwOC00OWU2ZTViZDBiNzhgOjptb3ZpbmcgZnJv bSBzdGF0ZSBwcmVwYXJpbmcgLT4gc3RhdGUgZmluaXNoZWQNClRocmVhZC0xMzo6REVCVUc6OjIw MTQtMTEtMjUgMTQ6NTU6MzgsMzg0OjpyZXNvdXJjZU1hbmFnZXI6Ojk0MDo6U3RvcmFnZS5SZXNv dXJjZU1hbmFnZXIuT3duZXI6OihyZWxlYXNlQWxsKSBPd25lci5yZWxlYXNlQWxsIHJlcXVlc3Rz IHt9IHJlc291cmNlcyB7fQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCwz ODQ6OnJlc291cmNlTWFuYWdlcjo6OTc3OjpTdG9yYWdlLlJlc291cmNlTWFuYWdlci5Pd25lcjo6 KGNhbmNlbEFsbCkgT3duZXIuY2FuY2VsQWxsIHJlcXVlc3RzIHt9DQpUaHJlYWQtMTM6OkRFQlVH OjoyMDE0LTExLTI1IDE0OjU1OjM4LDM4NTo6dGFzazo6OTkzOjpTdG9yYWdlLlRhc2tNYW5hZ2Vy LlRhc2s6OihfZGVjcmVmKSBUYXNrPWAwYmJkODA4MC1hYjRjLTQ5YTItOTcwOC00OWU2ZTViZDBi NzhgOjpyZWYgMCBhYm9ydGluZyBGYWxzZQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAx NDo1NTozOCw0MzU6OnRhc2s6OjU5NTo6U3RvcmFnZS5UYXNrTWFuYWdlci5UYXNrOjooX3VwZGF0 ZVN0YXRlKSBUYXNrPWBiY2ZhMTI3Ni1mZDU0LTRhYmMtOWY5OS1iNGYwZDVhNjViYzFgOjptb3Zp bmcgZnJvbSBzdGF0ZSBpbml0IC0+IHN0YXRlIHByZXBhcmluZw0KVGhyZWFkLTEzOjpJTkZPOjoy MDE0LTExLTI1IDE0OjU1OjM4LDQzNTo6bG9nVXRpbHM6OjQ0OjpkaXNwYXRjaGVyOjood3JhcHBl 
cikgUnVuIGFuZCBwcm90ZWN0OiByZXBvU3RhdHMob3B0aW9ucz1Ob25lKQ0KVGhyZWFkLTEzOjpJ TkZPOjoyMDE0LTExLTI1IDE0OjU1OjM4LDQzNTo6bG9nVXRpbHM6OjQ3OjpkaXNwYXRjaGVyOjoo d3JhcHBlcikgUnVuIGFuZCBwcm90ZWN0OiByZXBvU3RhdHMsIFJldHVybiByZXNwb25zZTogeyc5 YjJmZGNlOS1hNzk1LTQ0YTctODZlZC05OWU1MTRkMjQzOTknOiB7J2NvZGUnOiAwLCAndmVyc2lv bic6IDAsICdhY3F1aXJlZCc6IFRydWUsICdkZWxheSc6ICcwLjAwMDI0NTQ2NCcsICdsYXN0Q2hl Y2snOiAnMC4zJywgJ3ZhbGlkJzogVHJ1ZX0sICdlYTQ5OWNjMC02ODRmLTRhMmYtYWJlZi00M2Zm MjhkNjQ5Y2InOiB7J2NvZGUnOiAwLCAndmVyc2lvbic6IDMsICdhY3F1aXJlZCc6IEZhbHNlLCAn ZGVsYXknOiAnMC4wMDA0ODA5NTcnLCAnbGFzdENoZWNrJzogJzAuMycsICd2YWxpZCc6IFRydWV9 fQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCw0MzU6OnRhc2s6OjExOTE6 OlN0b3JhZ2UuVGFza01hbmFnZXIuVGFzazo6KHByZXBhcmUpIFRhc2s9YGJjZmExMjc2LWZkNTQt NGFiYy05Zjk5LWI0ZjBkNWE2NWJjMWA6OmZpbmlzaGVkOiB7JzliMmZkY2U5LWE3OTUtNDRhNy04 NmVkLTk5ZTUxNGQyNDM5OSc6IHsnY29kZSc6IDAsICd2ZXJzaW9uJzogMCwgJ2FjcXVpcmVkJzog VHJ1ZSwgJ2RlbGF5JzogJzAuMDAwMjQ1NDY0JywgJ2xhc3RDaGVjayc6ICcwLjMnLCAndmFsaWQn OiBUcnVlfSwgJ2VhNDk5Y2MwLTY4NGYtNGEyZi1hYmVmLTQzZmYyOGQ2NDljYic6IHsnY29kZSc6 IDAsICd2ZXJzaW9uJzogMywgJ2FjcXVpcmVkJzogRmFsc2UsICdkZWxheSc6ICcwLjAwMDQ4MDk1 NycsICdsYXN0Q2hlY2snOiAnMC4zJywgJ3ZhbGlkJzogVHJ1ZX19DQpUaHJlYWQtMTM6OkRFQlVH OjoyMDE0LTExLTI1IDE0OjU1OjM4LDQzNTo6dGFzazo6NTk1OjpTdG9yYWdlLlRhc2tNYW5hZ2Vy LlRhc2s6OihfdXBkYXRlU3RhdGUpIFRhc2s9YGJjZmExMjc2LWZkNTQtNGFiYy05Zjk5LWI0ZjBk NWE2NWJjMWA6Om1vdmluZyBmcm9tIHN0YXRlIHByZXBhcmluZyAtPiBzdGF0ZSBmaW5pc2hlZA0K VGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1NTozOCw0MzU6OnJlc291cmNlTWFuYWdl cjo6OTQwOjpTdG9yYWdlLlJlc291cmNlTWFuYWdlci5Pd25lcjo6KHJlbGVhc2VBbGwpIE93bmVy LnJlbGVhc2VBbGwgcmVxdWVzdHMge30gcmVzb3VyY2VzIHt9DQpUaHJlYWQtMTM6OkRFQlVHOjoy MDE0LTExLTI1IDE0OjU1OjM4LDQzNjo6cmVzb3VyY2VNYW5hZ2VyOjo5Nzc6OlN0b3JhZ2UuUmVz b3VyY2VNYW5hZ2VyLk93bmVyOjooY2FuY2VsQWxsKSBPd25lci5jYW5jZWxBbGwgcmVxdWVzdHMg e30NClRocmVhZC0xMzo6REVCVUc6OjIwMTQtMTEtMjUgMTQ6NTU6MzgsNDM2Ojp0YXNrOjo5OTM6 OlN0b3JhZ2UuVGFza01hbmFnZXIuVGFzazo6KF9kZWNyZWYpIFRhc2s9YGJjZmExMjc2LWZkNTQt NGFiYy05Zjk5LWI0ZjBkNWE2NWJjMWA6OnJlZiAwIGFib3J0aW5nIEZhbHNlDQpUaHJlYWQtMTM6 OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM4LDg3Nzo6QmluZGluZ1hNTFJQQzo6MTEzMjo6dmRz Ojood3JhcHBlcikgY2xpZW50IFsxMC4xMC4xMC4yXTo6Y2FsbCBzZXRNT01Qb2xpY3lQYXJhbWV0 ZXJzIHdpdGggKHsnYmFsbG9vbkVuYWJsZWQnOiBGYWxzZSwgJ2tzbUVuYWJsZWQnOiBGYWxzZX0s KSB7fSBmbG93SUQgWzVkMmMwMDYxXQ0KVGhyZWFkLTEzOjpERUJVRzo6MjAxNC0xMS0yNSAxNDo1 NTozOCw5NTE6OkJpbmRpbmdYTUxSUEM6OjExMzk6OnZkczo6KHdyYXBwZXIpIHJldHVybiBzZXRN T01Qb2xpY3lQYXJhbWV0ZXJzIHdpdGggeydzdGF0dXMnOiB7J21lc3NhZ2UnOiAnRG9uZScsICdj b2RlJzogMH19DQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjM5LDAxMTo6Qmlu ZGluZ1hNTFJQQzo6MTEzMjo6dmRzOjood3JhcHBlcikgY2xpZW50IFsxMC4xMC4xMC4yXTo6Y2Fs bCBob3N0VVVJREdldCB3aXRoICgpIHt9IGZsb3dJRCBbNWQyYzAwNjFdDQpUaHJlYWQtMTM6OkRF QlVHOjoyMDE0LTExLTI1IDE0OjU1OjM5LDA0MTo6QmluZGluZ1hNTFJQQzo6MTEzOTo6dmRzOjoo d3JhcHBlcikgcmV0dXJuIGhvc3RVVUlER2V0IHdpdGggeydzdGF0dXMnOiB7J21lc3NhZ2UnOiAn RG9uZScsICdjb2RlJzogMH0sICd1dWlkJzogJzkxOGNlZjFmLWVkODMtNDU3MC05MGFkLTM5YzVj ZWVjNzUzYyd9DQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjQxLDczMDo6Qmlu ZGluZ1hNTFJQQzo6MTEzMjo6dmRzOjood3JhcHBlcikgY2xpZW50IFsxMC4xMC4xMC4yXTo6Y2Fs bCB2b2x1bWVzTGlzdCB3aXRoICgpIHt9DQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTExLTI1IDE0 OjU1OjQxLDc0MDo6QmluZGluZ1hNTFJQQzo6MTEzOTo6dmRzOjood3JhcHBlcikgcmV0dXJuIHZv bHVtZXNMaXN0IHdpdGggeydzdGF0dXMnOiB7J21lc3NhZ2UnOiAnRG9uZScsICdjb2RlJzogMH0s ICd2b2x1bWVzJzogeydkYXRhMSc6IHsndHJhbnNwb3J0VHlwZSc6IFsnVENQJ10sICd1dWlkJzog J2U2MTE3OTI1LTc5YjEtNDE3Yi05ZDA3LWNmYzMxZjY4YmM1MScsICdicmlja3MnOiBbJ2NvbXB1 
dGUxLjNsaW51eC5jb206L2JyaWNrMScsICdjb21wdXRlMi4zbGludXguY29tOi9icmljazMnLCAn Y29tcHV0ZTMuM2xpbnV4LmNvbTovYnJpY2s1JywgJ2NvbXB1dGU0LjNsaW51eC5jb206L2JyaWNr NycsICdjb21wdXRlMS4zbGludXguY29tOi9icmljazInLCAnY29tcHV0ZTIuM2xpbnV4LmNvbTov YnJpY2s0JywgJ2NvbXB1dGUzLjNsaW51eC5jb206L2JyaWNrNicsICdjb21wdXRlNC4zbGludXgu Y29tOi9icmljazgnXSwgJ3ZvbHVtZU5hbWUnOiAnZGF0YTEnLCAndm9sdW1lVHlwZSc6ICdESVNU UklCVVRFRF9SRVBMSUNBVEUnLCAncmVwbGljYUNvdW50JzogJzInLCAnYnJpY2tDb3VudCc6ICc4 JywgJ2Rpc3RDb3VudCc6ICcyJywgJ3ZvbHVtZVN0YXR1cyc6ICdPTkxJTkUnLCAnc3RyaXBlQ291 bnQnOiAnMScsICdicmlja3NJbmZvJzogW3snbmFtZSc6ICdjb21wdXRlMS4zbGludXguY29tOi9i cmljazEnLCAnaG9zdFV1aWQnOiAnNjliMTlmNGUtMDNjOC00ZmUxLWE2NDQtMjcwMWM0MWQ3MDIw J30sIHsnbmFtZSc6ICdjb21wdXRlMi4zbGludXguY29tOi9icmljazMnLCAnaG9zdFV1aWQnOiAn MmNmMzQyOGMtNjJiZS00MmY0LTg2ZDMtZWY2N2U3ZGZjOTdmJ30sIHsnbmFtZSc6ICdjb21wdXRl My4zbGludXguY29tOi9icmljazUnLCAnaG9zdFV1aWQnOiAnMGQ5NGQ4OTYtZTk5NC00ZGViLWIx YTEtZDZmNjhiZTg2YzMwJ30sIHsnbmFtZSc6ICdjb21wdXRlNC4zbGludXguY29tOi9icmljazcn LCAnaG9zdFV1aWQnOiAnOTE4Y2VmMWYtZWQ4My00NTcwLTkwYWQtMzljNWNlZWM3NTNjJ30sIHsn bmFtZSc6ICdjb21wdXRlMS4zbGludXguY29tOi9icmljazInLCAnaG9zdFV1aWQnOiAnNjliMTlm NGUtMDNjOC00ZmUxLWE2NDQtMjcwMWM0MWQ3MDIwJ30sIHsnbmFtZSc6ICdjb21wdXRlMi4zbGlu dXguY29tOi9icmljazQnLCAnaG9zdFV1aWQnOiAnMmNmMzQyOGMtNjJiZS00MmY0LTg2ZDMtZWY2 N2U3ZGZjOTdmJ30sIHsnbmFtZSc6ICdjb21wdXRlMy4zbGludXguY29tOi9icmljazYnLCAnaG9z dFV1aWQnOiAnMGQ5NGQ4OTYtZTk5NC00ZGViLWIxYTEtZDZmNjhiZTg2YzMwJ30sIHsnbmFtZSc6 ICdjb21wdXRlNC4zbGludXguY29tOi9icmljazgnLCAnaG9zdFV1aWQnOiAnOTE4Y2VmMWYtZWQ4 My00NTcwLTkwYWQtMzljNWNlZWM3NTNjJ31dLCAnb3B0aW9ucyc6IHsnY2x1c3Rlci5zZXJ2ZXIt cXVvcnVtLXR5cGUnOiAnc2VydmVyJywgJ2NsdXN0ZXIuZWFnZXItbG9jayc6ICdlbmFibGUnLCAn cGVyZm9ybWFuY2Uuc3RhdC1wcmVmZXRjaCc6ICdvZmYnLCAnYXV0aC5hbGxvdyc6ICcqJywgJ2Ns dXN0ZXIucXVvcnVtLXR5cGUnOiAnYXV0bycsICdwZXJmb3JtYW5jZS5xdWljay1yZWFkJzogJ29m ZicsICduZXR3b3JrLnJlbW90ZS1kaW8nOiAnZW5hYmxlJywgJ25mcy5kaXNhYmxlJzogJ29mZics ICdwZXJmb3JtYW5jZS5pby1jYWNoZSc6ICdvZmYnLCAnc3RvcmFnZS5vd25lci11aWQnOiAnMzYn LCAndXNlci5jaWZzJzogJ2VuYWJsZScsICdwZXJmb3JtYW5jZS5yZWFkLWFoZWFkJzogJ29mZics ICdzdG9yYWdlLm93bmVyLWdpZCc6ICczNid9fX19DQpUaHJlYWQtMTM6OkRFQlVHOjoyMDE0LTEx LTI1IDE0OjU1OjQ3LDA3ODo6QmluZGluZ1hNTFJQQzo6MTEzMjo6dmRzOjood3JhcHBlcikgY2xp ZW50IFsxMC4xMC4xMC4yXTo6Y2FsbCB2b2x1bWVzTGlzdCB3aXRoICgpIHt9DQpUaHJlYWQtMTM6 OkRFQlVHOjoyMDE0LTExLTI1IDE0OjU1OjQ3LDA4OTo6QmluZGluZ1hNTFJQQzo6MTEzOTo6dmRz Ojood3JhcHBlcikgcmV0dXJuIHZvbHVtZXNMaXN0IHdpdGggeydzdGF0dXMnOiB7J21lc3NhZ2Un OiAnRG9uZScsICdjb2RlJzogMH0sICd2b2x1bWVzJzogeydkYXRhMSc6IHsndHJhbnNwb3J0VHlw ZSc6IFsnVENQJ10sICd1dWlkJzogJ2U2MTE3OTI1LTc5YjEtNDE3Yi05ZDA3LWNmYzMxZjY4YmM1 MScsICdicmlja3MnOiBbJ2NvbXB1dGUxLjNsaW51eC5jb206L2JyaWNrMScsICdjb21wdXRlMi4z bGludXguY29tOi9icmljazMnLCAnY29tcHV0ZTMuM2xpbnV4LmNvbTovYnJpY2s1JywgJ2NvbXB1 dGU0LjNsaW51eC5jb206L2JyaWNrNycsICdjb21wdXRlMS4zbGludXguY29tOi9icmljazInLCAn Y29tcHV0ZTIuM2xpbnV4LmNvbTovYnJpY2s0JywgJ2NvbXB1dGUzLjNsaW51eC5jb206L2JyaWNr NicsICdjb21wdXRlNC4zbGludXguY29tOi9icmljazgnXSwgJ3ZvbHVtZU5hbWUnOiAnZGF0YTEn LCAndm9sdW1lVHlwZSc6ICdESVNUUklCVVRFRF9SRVBMSUNBVEUnLCAncmVwbGljYUNvdW50Jzog JzInLCAnYnJpY2tDb3VudCc6ICc4JywgJ2Rpc3RDb3VudCc6ICcyJywgJ3ZvbHVtZVN0YXR1cyc6 ICdPTkxJTkUnLCAnc3RyaXBlQ291bnQnOiAnMScsICdicmlja3NJbmZvJzogW3snbmFtZSc6ICdj b21wdXRlMS4zbGludXguY29tOi9icmljazEnLCAnaG9zdFV1aWQnOiAnNjliMTlmNGUtMDNjOC00 ZmUxLWE2NDQtMjcwMWM0MWQ3MDIwJ30sIHsnbmFtZSc6ICdjb21wdXRlMi4zbGludXguY29tOi9i cmljazMnLCAnaG9zdFV1aWQnOiAnMmNmMzQyOGMtNjJiZS00MmY0LTg2ZDMtZWY2N2U3ZGZjOTdm J30sIHsnbmFtZSc6ICdjb21wdXRlMy4zbGludXguY29tOi9icmljazUnLCAnaG9zdFV1aWQnOiAn 
[... remainder of base64-encoded VDSM log attachment omitted ...]
[Attachment: "vdsm_just after_reboot.txt" (base64-encoded VDSM log) omitted]

Hi, My Glusterfs version is :- glusterfs-3.6.1-1.el7
On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:
[+Gluster-users@gluster.org]
"Initialization of volume 'management' failed, review your volfile again", glusterd throws this error when the service is started automatically after the reboot. But the service is successfully started later manually by the user.
Can somebody from gluster-users please help with this?
glusterfs version: 3.5.1
Thanks, Kanagaraj
From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: users@ovirt.org Sent: Tuesday, November 25, 2014 7:24:45 PM Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...
Hi Kanagaraj,
Please check the attached log files.... I didn't find anything special....
On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the service is trying to start automatically after the reboot?
Thanks, Kanagaraj
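If it helps, a rough way to pull the relevant entries after the next reboot; the vdsm.log path below (/var/log/vdsm/vdsm.log) is assumed to be the default install location:

# Errors recorded by glusterd around the failed automatic start
grep -i "error" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -20
# The matching failure on the VDSM side (the exception quoted later in this thread)
grep -i "GlusterCmdExecFailedException" /var/log/vdsm/vdsm.log | tail -5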
On 11/24/2014 08:13 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Yes... once I start the gluster service and then vdsmd, the host can connect to the cluster... but the question is why glusterd is not started at boot even though it is chkconfig-enabled...
I have tested this in a two-host cluster environment (CentOS 6.6 and CentOS 7.0); on both hypervisor clusters the host fails to reconnect to the cluster after a reboot....
In both environments glusterd is enabled for the next boot... but it fails with the same error.... It seems to be a bug in either Gluster or oVirt??
Please help me find a workaround if this cannot be resolved... without one the host machine cannot connect after a reboot, which means the engine will consider it down and every time I need to manually start the gluster service and vdsmd (the manual sequence is sketched below)... ??
Thanks, Punit
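For reference, a minimal sketch of the manual recovery sequence described above, using the SysV service names that appear elsewhere in this thread (on the CentOS 7 host the systemctl equivalents would apply):

# Manual workaround after a reboot: bring glusterd up first, then restart vdsmd
service glusterd start
service glusterd status      # should now report the daemon as running
gluster peer status          # sanity check: peers should be connected again
service vdsmd restart        # lets VDSM reconnect so the engine can re-activate the host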
On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
From vdsm.log "error: Connection failed. Please check if gluster daemon is operational."
Starting glusterd service should fix this issue. 'service glusterd start' But i am wondering why the glusterd was not started automatically after the reboot.
Thanks, Kanagaraj
On 11/24/2014 07:18 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Please find the attached VDSM logs :-
---------------- Thread-13::DEBUG::2014-11-24
21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state
-> state finished Thread-13::DEBUG::2014-11-24
21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24
21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {} Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None) Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31 Thread-13::DEBUG::2014-11-24 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, 'Internal': {'iface': 'Internal', 
'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '31', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '131', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 
'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': {'DEFROUTE': 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', 'GATEWAY': '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': '1500', 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': '10.10.10.2'}} Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {} Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 
'systemManufacturer': 'Dell'}} Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036] Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured Traceback (most recent call last): File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList return {'hosts': self.svdsmProxy.glusterPeerStatus()} File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> **kwargs) File "<string>", line 2, in glusterPeerStatus File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) GlusterCmdExecFailedException: Command execution failed error: Connection failed. Please check if gluster daemon is operational. return code: 1 Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state
preparing
-> state finished Thread-13::DEBUG::2014-11-24
21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24
21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False -------------------------------
[root@compute4 ~]# service glusterd status
glusterd is stopped
[root@compute4 ~]# chkconfig --list | grep glusterd
glusterd        0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@compute4 ~]#
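Given that glusterd is enabled for runlevels 3-5 yet ends up stopped after the boot, a small diagnostic sketch for seeing what the automatic start attempt actually logged (default EL6 log locations assumed):

# Did the init script report a failure during boot?
grep -i gluster /var/log/boot.log
# Anything from glusterd around boot time in the system log?
grep -i glusterd /var/log/messages | tail -50
# The daemon's own log, for the volfile initialization error reported in this thread
grep -i "Initialization of volume" /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -5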
Thanks, Punit
On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Can you send the corresponding error in vdsm.log from the host?
Also check if the glusterd service is running.
Thanks, Kanagaraj
On 11/24/2014 03:39 PM, Punit Dambiwal wrote:
Hi,
After reboot my Hypervisior host can not activate again in the cluster and failed with the following error :-
Gluster command [<UNKNOWN>] failed on server...
Engine logs :-
2014-11-24 18:05:28,397 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-64) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90 2014-11-24 18:05:30,609 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-64) FINISH, GlusterVolumesListVDSCommand, return:
{26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0 },
log id: 5f251c90 2014-11-24 18:05:33,768 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (ajp--127.0.0.1-8702-8) [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ] 2014-11-24 18:05:33,795 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: ActivateVdsCommand internal: false. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group MANIPULATE_HOST with role type ADMIN 2014-11-24 18:05:33,796 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring lock in order to prevent monitoring for host Compute5 from data-center SV_WTC 2014-11-24 18:05:33,797 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, from now a monitoring of host will be skipped for host Compute5 from data-center SV_WTC 2014-11-24 18:05:33,817 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: 1cbc7311 2014-11-24 18:05:33,820 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, SetVdsStatusVDSCommand, log id: 1cbc7311 2014-11-24 18:05:34,086 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock released. Monitoring can run now for host Compute5 from data-center SV_WTC 2014-11-24 18:05:34,088 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, Job ID: 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom Event ID: -1, Message: Host Compute5 was activated by admin. 2014-11-24 18:05:34,090 INFO [org.ovirt.engine.core.bll.ActivateVdsCommand] (org.ovirt.thread.pool-8-thread-45) Lock freed to object EngineLock [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS , sharedLocks= ] 2014-11-24 18:05:35,792 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-55) [3706e836] START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832 2014-11-24 18:05:37,064 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-69) START, GetHardwareInfoVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log id: 6d560cc2 2014-11-24 18:05:37,074 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand]
(DefaultQuartzScheduler_Worker-69) FINISH, GetHardwareInfoVDSCommand, log id: 6d560cc2 2014-11-24 18:05:37,093 WARN [org.ovirt.engine.core.vdsbroker.VdsManager] (DefaultQuartzScheduler_Worker-69) Host Compute5 is running with disabled SELinux. 2014-11-24 18:05:37,127 INFO [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: HandleVdsCpuFlagsOrClusterChangedCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS 2014-11-24 18:05:37,147 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, GlusterServersListVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87 2014-11-24 18:05:37,164 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand]
(DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, GlusterServersListVDSCommand, log id: 4faed87 2014-11-24 18:05:37,189 INFO [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: SetNonOperationalVdsCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS 2014-11-24 18:05:37,206 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, SetVdsStatusVDSCommand(HostName = Compute5, HostId = 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, nonOperationalReason=GLUSTER_COMMAND_FAILED, stopSpmFailureLogged=false), log id: fed5617 2014-11-24 18:05:37,209 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, SetVdsStatusVDSCommand, log id: fed5617 2014-11-24 18:05:37,223 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: 4a84c4e5, Job ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, Custom Event ID: -1, Message: Gluster command [<UNKNOWN>] failed on server Compute5. 2014-11-24 18:05:37,243 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: Status of host Compute5 was set to NonOperational. 2014-11-24 18:05:37,272 INFO [org.ovirt.engine.core.bll.HandleVdsVersionCommand] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: HandleVdsVersionCommand internal: true. Entities affected : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS 2014-11-24 18:05:37,274 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in NonOperational status for reason GLUSTER_COMMAND_FAILED. SetNonOperationalVds command is skipped. 2014-11-24 18:05:38,065 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, GlusterVolumesListVDSCommand, return:
{26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1 },
log id: 48a0c832 2014-11-24 18:05:43,243 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-35) START, GlusterVolumesListVDSCommand(HostName = Compute4, HostId = 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc ^C [root@ccr01 ~]#
Thanks, Punit
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

Is there anyone who can help on this ??
Thanks, Punit
On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
My Glusterfs version is :- glusterfs-3.6.1-1.el7
On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:
[+Gluster-users@gluster.org]
"Initialization of volume 'management' failed, review your volfile again" - glusterd throws this error when the service is started automatically after the reboot, but the service starts successfully when the user later starts it manually.
Can somebody from gluster-users please help on this?
glusterfs version: 3.5.1
Thanks, Kanagaraj
----- Original Message -----
From: "Punit Dambiwal" <hypunit@gmail.com>
To: "Kanagaraj" <kmayilsa@redhat.com>
Cc: users@ovirt.org
Sent: Tuesday, November 25, 2014 7:24:45 PM
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...
Hi Kanagaraj,
Please check the attached log files.... I didn't find anything special....
On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the service is trying to start automatically after the reboot?
Thanks, Kanagaraj
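[Aside: one quick way to pull the boot-time errors being asked about here. The glusterd log path is the one named above; the vdsm.log location (/var/log/vdsm/vdsm.log) and the grep patterns are assumptions, not taken from the thread.]
----------------
# error-level ("E ") entries from the gluster management daemon log
grep " E " /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20
# gluster-related failures recorded by VDSM (default log path assumed)
grep -iE "gluster|Connection failed" /var/log/vdsm/vdsm.log | tail -n 20
----------------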
On 11/24/2014 08:13 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Yes... once I start the gluster service and then vdsmd, the host can connect to the cluster... but the question is why it's not started automatically even though it has chkconfig enabled...
I have tested this in a two-host cluster environment (CentOS 6.6 and CentOS 7.0); on both hypervisor clusters it fails to reconnect to the cluster after a reboot....
In both environments glusterd is enabled for the next boot.... but it fails with the same error.... seems it's a bug in either Gluster or oVirt ??
Please help me find a workaround here if it cannot be resolved... as without this the host machine cannot connect after a reboot.... that means the engine will consider it down, and every time I need to manually start the gluster service and vdsmd... ??
Thanks, Punit
On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
From vdsm.log: "error: Connection failed. Please check if gluster daemon is operational."
Starting the glusterd service should fix this issue: 'service glusterd start'. But I am wondering why glusterd was not started automatically after the reboot.
Thanks, Kanagaraj
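[Aside: a minimal sketch of the manual recovery described just above. The glusterd and vdsmd service names come from this thread; restarting vdsmd and re-activating the host from the engine UI are assumptions about the usual follow-up, not steps spelled out here.]
----------------
service glusterd start      # start the gluster management daemon by hand
gluster peer status         # confirm the peers show up as connected
service vdsmd restart       # let VDSM re-run its gluster queries (assumed step)
# then re-activate the host from the oVirt engine UI
----------------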
On 11/24/2014 07:18 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Please find the attached VDSM logs :-
---------------- Thread-13::DEBUG::2014-11-24
21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state
preparing -> state finished Thread-13::DEBUG::2014-11-24
21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24
21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {} Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None) Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31 Thread-13::DEBUG::2014-11-24 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, 'Internal': {'iface': 'Internal', 
'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '31', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '131', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 
'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270',
'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': {'DEFROUTE': 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', 'GATEWAY': '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': '1500', 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': '10.10.10.2'}} Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {} Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 
'systemManufacturer': 'Dell'}} Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036] Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured Traceback (most recent call last): File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList return {'hosts': self.svdsmProxy.glusterPeerStatus()} File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> **kwargs) File "<string>", line 2, in glusterPeerStatus File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) GlusterCmdExecFailedException: Command execution failed error: Connection failed. Please check if gluster daemon is operational. return code: 1 Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state
preparing -> state finished Thread-13::DEBUG::2014-11-24
21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24
21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False -------------------------------
[root@compute4 ~]# service glusterd status
glusterd is stopped
[root@compute4 ~]# chkconfig --list | grep glusterd
glusterd    0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@compute4 ~]#
Thanks, Punit
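[Aside: "glusterd is stopped" with chkconfig set to on can mean either that the init script never ran at boot, or that glusterd started and then exited. A rough way to tell the two apart, assuming the default log location:]
----------------
who -b          # last system boot time
grep "Started running /usr/sbin/glusterd" \
     /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 3
# an entry at (or just after) the boot time means glusterd did start and then
# died; no entry at all means the init script never launched it
----------------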
On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Can you send the corresponding error in vdsm.log from the host?
Also check if glusterd service is running.
Thanks, Kanagaraj
On 11/24/2014 03:39 PM, Punit Dambiwal wrote:
Hi,
After reboot my Hypervisior host can not activate again in the cluster and failed with the following error :-
Gluster command [<UNKNOWN>] failed on server...
Engine logs :-
[engine log snipped - identical to the engine log quoted earlier in this thread]
Thanks, Punit

Based on the logs I can guess that glusterd is being started before the network has come up, and that the addresses given to the bricks do not directly match the addresses used during peer probe.

The gluster_after_reboot log has the line "[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore".

Brick resolution fails when glusterd cannot match the address for the brick with one of the peers. Brick resolution happens in two phases:
1. We first try to identify the peer by performing string comparisons between the brick address and the peer addresses (the peer names will be the names/addresses that were given when the peer was probed).
2. If we don't find a match from step 1, we then resolve the brick address and the peer addresses into addrinfo structs and compare these structs to find a match. This process should generally find a match if one is available. It will fail only if the network is not up yet, as we cannot resolve addresses.

The above steps are applicable only to glusterfs versions >= 3.6. They were introduced to reduce problems with peer identification, like the one you encountered.

Since both of the steps failed to find a match in one run, but succeeded later, we can come to the conclusion that
a) the bricks don't have the exact same string used in peer probe for their addresses, as step 1 failed, and
b) the network was not up in the initial run, as step 2 failed during the initial run but passed in the second run.

Please let me know if my conclusion is correct. If it is, you can solve your problem in two ways.
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names.
2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.

If you have already followed these steps and yet still hit the problem, then please provide more information (setup, logs, etc.). It could be a much different problem that you are facing.

~kaushal
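[Aside: a minimal sketch of what the two suggestions above can look like in practice. The brick path in the comment is hypothetical, and the systemd drop-in is only one way to delay glusterd until the network is actually online - adjust for your distribution and init system.]
----------------
# 1. Check that brick addresses use exactly the same strings as the probed peers
gluster peer status | grep -E "Hostname|Other names"
gluster volume info | grep -E "^Brick[0-9]+:"
# e.g. peers probed as cpu02.stack.com but a brick created as 10.10.0.8:/brick1
# would make the step-1 string comparison fail

# 2. On a systemd host, a drop-in (hypothetical file name) can order glusterd
#    after network-online.target
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/wait-online.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload
# note: a *-wait-online service (e.g. NetworkManager-wait-online) must be
# enabled for network-online.target to actually wait for addresses
----------------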
Is there any one can help on this ??
Thanks, punit
On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
My Glusterfs version is :- glusterfs-3.6.1-1.el7
On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:
[+Gluster-users@gluster.org]
"Initialization of volume 'management' failed, review your volfile again", glusterd throws this error when the service is started automatically after the reboot. But the service is successfully started later manually by the user.
can somebody from gluster-users please help on this?
glusterfs version: 3.5.1
Thanks, Kanagaraj
----- Original Message -----
From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: users@ovirt.org Sent: Tuesday, November 25, 2014 7:24:45 PM Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...
Hi Kanagraj,
Please check the attached log files....i didn't find any thing special....
On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the service is trying to start automatically after the reboot?
Thanks, Kanagaraj
On 11/24/2014 08:13 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Yes...once i will start the gluster service and then vdsmd ...the host can connect to cluster...but the question is why it's not started even it has chkconfig enabled...
I have tested it in two host cluster environment...(Centos 6.6 and centos 7.0) on both hypervisior cluster..it's failed to reconnect in to cluster after reboot....
In both the environment glusterd enabled for next boot....but it's failed with the same error....seems it's bug in either gluster or Ovirt ??
Please help me to find the workaround here if can not resolve it...as without this the Host machine can not connect after reboot....that means engine will consider it as down and every time need to manually start the gluster service and vdsmd... ??
Thanks, Punit
On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
From vdsm.log "error: Connection failed. Please check if gluster daemon is operational."
Starting glusterd service should fix this issue. 'service glusterd start' But i am wondering why the glusterd was not started automatically after the reboot.
Thanks, Kanagaraj
On 11/24/2014 07:18 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Please find the attached VDSM logs :-
---------------- Thread-13::DEBUG::2014-11-24
21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state preparing -> state finished Thread-13::DEBUG::2014-11-24
21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24
21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {} Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None) Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0 Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31 Thread-13::DEBUG::2014-11-24 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': u'bond0.10', 
'ipv6gateway': '::', 'gateway': '43.25.17.1'}, 'Internal': {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '31', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '131', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': 
'1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags':
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}, 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': '2667.000', 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, 10, 11, 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, 1, 2, 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', 'vlans': {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': {'DEFROUTE': 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', 'GATEWAY': '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': '1500', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', 'addr': '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': '1500', 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', 'addr': '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': '9000', 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': 'true', 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', u'rhel5.4.0'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, 'lastClient': 
'10.10.10.2'}} Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {} Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}} Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036] Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured Traceback (most recent call last): File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList return {'hosts': self.svdsmProxy.glusterPeerStatus()} File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> **kwargs) File "<string>", line 2, in glusterPeerStatus File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod raise convert_to_error(kind, result) GlusterCmdExecFailedException: Command execution failed error: Connection failed. Please check if gluster daemon is operational. return code: 1 Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished Thread-13::DEBUG::2014-11-24
21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-13::DEBUG::2014-11-24
21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False -------------------------------
[root@compute4 ~]# service glusterd status glusterd is stopped [root@compute4 ~]# chkconfig --list | grep glusterd glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off [root@compute4 ~]#
Thanks, Punit
On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
> Can you send the corresponding error in vdsm.log from the host?
>
> Also check if glusterd service is running.
>
> Thanks,
> Kanagaraj
>
> On 11/24/2014 03:39 PM, Punit Dambiwal wrote:
> [original report and engine log quoted in full - snipped, see the start of the thread]

Hi Kaushal,

Thanks for the detailed reply....let me explain my setup first:

1. oVirt Engine
2. 4 hosts that also act as storage machines (host and gluster combined)
3. Every host has 24 bricks...

Now whenever a host machine reboots, it comes up but cannot join the cluster again and throws the following error: "Gluster command [<UNKNOWN>] failed on server.."

Please check my comments inline:

> 1. Use the same string for doing the peer probe and for the brick
> address during volume create/add-brick. Ideally, we suggest you use
> properly resolvable FQDNs everywhere. If that is not possible, then
> use only IP addresses. Try to avoid short names.

---------------
[root@cpu05 ~]# gluster peer status
Number of Peers: 3

Hostname: cpu03.stack.com
Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb
State: Peer in Cluster (Connected)

Hostname: cpu04.stack.com
Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0
State: Peer in Cluster (Connected)
Other names:
10.10.0.8

Hostname: cpu02.stack.com
Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25
State: Peer in Cluster (Connected)
[root@cpu05 ~]#
----------------

> 2. During boot up, make sure to launch glusterd only after the network
> is up. This will allow the new peer identification mechanism to do its
> job correctly.
I think the service unit itself is already doing the same job....
[root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
[root@cpu05 ~]#
--------------------

gluster logs :-

[2014-11-24 09:22:22.147471] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.1 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: Maximum allowed open file descriptors set to 65536
[2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: Using /var/lib/glusterd as working directory
[2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2014-11-24 09:22:22.155264] E [rdma.c:4483:init] 0-rdma.management: Failed to initialize IB Device
[2014-11-24 09:22:22.155285] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2014-11-24 09:22:22.155354] W [rpcsvc.c:1524:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2014-11-24 09:22:22.156290] I [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system
[2014-11-24 09:22:22.161318] I [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600
[2014-11-24 09:22:22.821800] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.825810] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828705] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828771] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.832670] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.835919] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.840209] E [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2014-11-24 09:22:22.840264] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down

Thanks,
Punit
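A note on the unit file above: After=network.target only waits for the network service to have started, not for addresses to actually be configured, and the shipped unit even orders itself Before=network-online.target. One way to act on the "start glusterd only after the network is up" suggestion on a systemd host (the CentOS 7 node) is sketched below. The wait-online service name depends on the network stack in use, so treat this as an assumption-laden sketch, not a verified fix:

---------------
# copy the vendor unit so the copy in /etc overrides it, then adjust the ordering
cp /usr/lib/systemd/system/glusterd.service /etc/systemd/system/glusterd.service

# in the copy: drop the "Before=network-online.target" line and use
#   Wants=network-online.target
#   After=network-online.target rpcbind.service

systemctl daemon-reload
systemctl enable NetworkManager-wait-online.service   # or systemd-networkd-wait-online.service,
                                                      # whichever matches the network stack
---------------

After a reboot, "systemctl status glusterd" should then show it starting only once the network is reported online.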
On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> wrote:

Based on the logs I can guess that glusterd is being started before the network has come up, and that the addresses given to the bricks do not directly match the addresses used during the peer probe.
The gluster_after_reboot log has the line "[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore".
Brick resolution fails when glusterd cannot match the address for the brick with one of the peers. Brick resolution happens in two phases:
1. We first try to identify the peer by performing string comparisons between the brick address and the peer addresses (the peer names will be the names/addresses that were given when the peer was probed).
2. If we don't find a match from step 1, we resolve both the brick address and the peer addresses into addrinfo structs, and then compare these structs to find a match. This process should generally find a match if one is available. It will fail only if the network is not up yet, since we cannot resolve addresses.
The above steps are applicable only to glusterfs versions >= 3.6. They were introduced to reduce problems with peer identification, like the one you encountered.
Since both steps failed to find a match in one run but succeeded later, we can conclude that a) the bricks don't use the exact same string that was used in the peer probe for their addresses (step 1 failed), and b) the network was not up in the initial run (step 2 failed in the initial run but passed in the second run).
Please let me know if my conclusion is correct.
If it is, you can solve your problem in two ways.
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names.
2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
If you have already followed these steps and yet still hit the problem, then please provide more information (setup, logs, etc.). It could be a much different problem that you are facing.
~kaushal
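To make suggestion 1 concrete, the strings glusterd stores for peers and for bricks can be compared directly from the CLI, and name resolution can be checked once the network is up. The volume name "myvol" and the brick path are only placeholders; cpu03.stack.com is one of the peers from the output earlier in the thread:

---------------
gluster pool list                            # peer names exactly as they were probed
gluster volume info myvol | grep -i brick    # host part of every brick, e.g. cpu03.stack.com:/bricks/b1
getent ahosts cpu03.stack.com                # if this fails right after boot, phase 2 of brick
                                             # resolution fails for the same reason
---------------

If the host part of a brick differs from the corresponding peer name (short name vs FQDN vs IP), that is exactly the mismatch that step 1 trips over.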
On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Is there anyone who can help with this??
Thanks, punit
On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
My Glusterfs version is :- glusterfs-3.6.1-1.el7
On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:
[+Gluster-users@gluster.org]

glusterd throws the error "Initialization of volume 'management' failed, review your volfile again" when the service is started automatically after the reboot. But the service is successfully started later manually by the user.

Can somebody from gluster-users please help with this?
glusterfs version: 3.5.1
Thanks, Kanagaraj
----- Original Message -----
From: "Punit Dambiwal" <hypunit@gmail.com>
To: "Kanagaraj" <kmayilsa@redhat.com>
Cc: users@ovirt.org
Sent: Tuesday, November 25, 2014 7:24:45 PM
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...
Hi Kanagaraj,

Please check the attached log files....I didn't find anything special....
On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the service is trying to start automatically after the reboot?
Thanks, Kanagaraj
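A quick way to pull the relevant lines out of the glusterd log mentioned above after a reboot; the grep patterns are simply the two errors already quoted in this thread, shown only as a sketch:

---------------
grep -E "resolve brick failed|Initialization of volume 'management' failed" \
    /var/log/glusterfs/etc-glusterfs-glusterd.vol.log | tail -n 20
---------------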
On 11/24/2014 08:13 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Yes...once I start the gluster service and then vdsmd, the host can connect to the cluster...but the question is why glusterd is not started automatically even though it is enabled with chkconfig...

I have tested it in a two-host cluster environment (CentOS 6.6 and CentOS 7.0); on both hypervisor clusters it fails to reconnect to the cluster after a reboot....

In both environments glusterd is enabled for the next boot....but it fails with the same error....seems like a bug in either Gluster or oVirt??

Please help me find a workaround here if this cannot be resolved...without it the host machine cannot connect after a reboot....that means the engine will consider it down, and every time I need to manually start the gluster service and vdsmd... ??
Thanks, Punit
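One way to narrow down whether glusterd never started at boot or started and then died, on the two OS flavours mentioned above; these are the stock service and journal commands, shown only as a sketch:

---------------
# CentOS 6 (SysV init)
chkconfig --list glusterd
grep -i glusterd /var/log/boot.log /var/log/messages | tail

# CentOS 7 (systemd)
systemctl is-enabled glusterd
journalctl -b -u glusterd --no-pager | tail -n 30
---------------

Seeing "resolve brick failed in restore" in that output would point back at the name-resolution/ordering issue discussed earlier in the thread.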
On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
> From vdsm.log "error: Connection failed. Please check if gluster > daemon > is operational." > > Starting glusterd service should fix this issue. 'service glusterd > start' > But i am wondering why the glusterd was not started automatically > after > the reboot. > > Thanks, > Kanagaraj > > > > On 11/24/2014 07:18 PM, Punit Dambiwal wrote: > > Hi Kanagaraj, > > Please find the attached VDSM logs :- > > ---------------- > Thread-13::DEBUG::2014-11-24 > > 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > Owner.cancelAll requests {} > Thread-13::DEBUG::2014-11-24 > 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) > Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False > Thread-13::DEBUG::2014-11-24 > 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) > Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init > -> > state preparing > Thread-13::INFO::2014-11-24 > 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: > repoStats(options=None) > Thread-13::INFO::2014-11-24 > 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: > repoStats, Return response: {} > Thread-13::DEBUG::2014-11-24 > 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) > Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} > Thread-13::DEBUG::2014-11-24 > 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) > Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state > preparing > -> > state finished > Thread-13::DEBUG::2014-11-24 > > 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > Owner.releaseAll requests {} resources {} > Thread-13::DEBUG::2014-11-24 > > 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > Owner.cancelAll requests {} > Thread-13::DEBUG::2014-11-24 > 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) > Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False > Thread-13::DEBUG::2014-11-24 > 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client > [10.10.10.2]::call > getCapabilities with () {} > Thread-13::DEBUG::2014-11-24 > 21:41:41,553::utils::738::root::(execCmd) > /sbin/ip route show to 0.0.0.0/0 table all (cwd None) > Thread-13::DEBUG::2014-11-24 > 21:41:41,560::utils::758::root::(execCmd) > SUCCESS: <err> = ''; <rc> = 0 > Thread-13::DEBUG::2014-11-24 > 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package > ('gluster-swift',) not found > Thread-13::DEBUG::2014-11-24 > 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package > ('gluster-swift-object',) not found > Thread-13::DEBUG::2014-11-24 > 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package > ('gluster-swift-plugin',) not found > Thread-13::DEBUG::2014-11-24 > 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package > ('gluster-swift-account',) not found > Thread-13::DEBUG::2014-11-24 > 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package > ('gluster-swift-proxy',) not found > Thread-13::DEBUG::2014-11-24 > 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package > ('gluster-swift-doc',) not found > Thread-13::DEBUG::2014-11-24 > 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package > ('gluster-swift-container',) not found > Thread-13::DEBUG::2014-11-24 > 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package > ('glusterfs-geo-replication',) not found > Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) > VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 
required >= > 0.10.2-31 > Thread-13::DEBUG::2014-11-24 > 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return > getCapabilities > with {'status': {'message': 'Done', 'code': 0}, 'info': > {'HBAInventory': > {'iSCSI': [{'InitiatorName': > 'iqn.1994-05.com.redhat:32151ce183c8'}], > 'FC': > []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', > 'buildtime': > 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': > '1.el6', > 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': > {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, > 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, > 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', > 'buildtime': > 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': > '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, > 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, > 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', > 'buildtime': > 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', > 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': > '2.el6', > 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': > {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, > 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': > 'Intel(R) > Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', > 'hooks': > {}, > 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, > 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': > {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', > 'bridged': > False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', > 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' > 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': > u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, > 'Internal': > {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', > 'HOTPLUG': > 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', > 'BOOTPROTO': > 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', > 'ONBOOT': > 'no'}, 'bridged': True, 'ipv6addrs': > ['fe80::210:18ff:fecd:daac/64'], > 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', > 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': > ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', > 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], > 'mtu': > '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', > 'ipv4addrs': [' > 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': u'bond1', > 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': > 'VMNetwork', > 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': > '1500', > 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': > 'off', > 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': > True, > 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', > 'bootproto4': > 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', > 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': > {'Internal': > {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': > '9000', > 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': > 'off', > 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, > 'ipv6addrs': > ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', > 'stp': > 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': 
'', 'opts': > {'topology_change_detected': '0', 'multicast_last_member_count': > '2', > 'hash_elasticity': '4', 'multicast_query_response_interval': '999', > 'multicast_snooping': '1', 'multicast_startup_query_interval': > '3124', > 'hello_timer': '31', 'multicast_querier_interval': '25496', > 'max_age': > '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': > '8000.001018cddaac', 'priority': '32768', > 'multicast_membership_interval': > '25996', 'root_path_cost': '0', 'root_port': '0', > 'multicast_querier': > '0', > 'multicast_startup_query_count': '2', 'hello_time': '199', > 'topology_change': '0', 'bridge_id': '8000.001018cddaac', > 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': > '31', > 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > 'multicast_query_interval': '12498', > 'multicast_last_member_interval': > '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': > ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', > 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', > 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': > 'Bridge', > 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], > 'mtu': > '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': > '::', > 'gateway': '', 'opts': {'topology_change_detected': '0', > 'multicast_last_member_count': '2', 'hash_elasticity': '4', > 'multicast_query_response_interval': '999', 'multicast_snooping': > '1', > 'multicast_startup_query_interval': '3124', 'hello_timer': '131', > 'multicast_querier_interval': '25496', 'max_age': '1999', > 'hash_max': > '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': > '32768', 'multicast_membership_interval': '25996', 'root_path_cost': > '0', > 'root_port': '0', 'multicast_querier': '0', > 'multicast_startup_query_count': '2', 'hello_time': '199', > 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', > 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': > '31', > 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > 'multicast_query_interval': '12498', > 'multicast_last_member_interval': > '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': > ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', > 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': > '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', > 'NM_CONTROLLED': > 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': > 'bond1', > 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', > 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', > 'speed': > 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', > 'cfg': > {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': > '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': > 'no'}, > 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], > 'hwaddr': > '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': > '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', > 'NM_CONTROLLED': > 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': > 'bond0', > 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', > 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', > 'speed': > 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', > 'cfg': > {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': > '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': > 'yes'}, > 'ipv6addrs': [], 'mtu': '1500', 
'netmask': '', 'ipv4addrs': [], > 'hwaddr': > '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', > 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], > 'cpuFlags': > > u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', > 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', > 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', > '3.3', > '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', > 'bondings': > {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', > 'slaves': > [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': > {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', > 'BONDING_OPTS': > 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, > 'ipv6addrs': > ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', > 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', > 'eth1'], > 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': > '10.10.10.6', > 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', > 'MTU': > '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', > 'BOOTPROTO': > 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', > 'ONBOOT': > 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', > 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' > <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': > ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': > {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], > 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, > 'mtu': > '1500', 'netmask': '', 'slaves': [], 'hwaddr': > '00:00:00:00:00:00'}}, > 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': > '2667.000', > 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, > 10, 11, > 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, > 1, 2, > 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', > 'vlans': > {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': > {'DEFROUTE': > 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', > 'GATEWAY': > '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', > 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': > 'yes'}, > 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': > '1500', > 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] > <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', > 'addr': > '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', > 'MTU': > '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': > 'no'}, > 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': > '1500', > 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', > 'addr': > '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', > 'MTU': > '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': > 'no'}, > 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': > '9000', > 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 
'kvmEnabled': > 'true', > 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': > [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', > u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', > u'rhel5.4.0'], > 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', > 'name': > 'RHEL'}, 'lastClient': '10.10.10.2'}} > Thread-13::DEBUG::2014-11-24 > 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client > [10.10.10.2]::call > getHardwareInfo with () {} > Thread-13::DEBUG::2014-11-24 > 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return > getHardwareInfo > with {'status': {'message': 'Done', 'code': 0}, 'info': > {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', > 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': > '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': > 'Dell'}} > Thread-13::DEBUG::2014-11-24 > 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client > [10.10.10.2]::call > hostsList with () {} flowID [222e8036] > Thread-13::ERROR::2014-11-24 > 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception > occured > Traceback (most recent call last): > File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper > res = f(*args, **kwargs) > File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper > rv = func(*args, **kwargs) > File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList > return {'hosts': self.svdsmProxy.glusterPeerStatus()} > File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ > return callMethod() > File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> > **kwargs) > File "<string>", line 2, in glusterPeerStatus > File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, > in > _callmethod > raise convert_to_error(kind, result) > GlusterCmdExecFailedException: Command execution failed > error: Connection failed. Please check if gluster daemon is > operational. 
> return code: 1
> Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing
> Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
> Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {}
> Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished
> Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False
> -------------------------------
>
> [root@compute4 ~]# service glusterd status
> glusterd is stopped
> [root@compute4 ~]# chkconfig --list | grep glusterd
> glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
> [root@compute4 ~]#
>
> Thanks,
> Punit
>
> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
>
>> [Kanagaraj's reply and the original report with the engine log were quoted here in full - snipped, see above.]
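The GlusterCmdExecFailedException in the vdsm log above is just the gluster CLI failing because glusterd is down; it can be reproduced and cleared by hand. CentOS 7 syntax shown below as a sketch; on the CentOS 6 node the equivalent would be "service glusterd start" and "service vdsmd restart":

---------------
gluster peer status        # with glusterd stopped this returns
                           # "Connection failed. Please check if gluster daemon is operational."
systemctl start glusterd
gluster peer status        # should now list the peers
systemctl restart vdsmd    # let vdsm re-query gluster so the engine can activate the host
---------------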

Hi,

Can anybody help me with this??

On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
Thanks for the detailed reply....let me explain my setup first :-
1. Ovirt Engine 2. 4* host as well as storage machine (Host and gluster combined) 3. Every host has 24 bricks...
Now whenever the host machine reboot...it can come up but can not join the cluster again and through the following error "Gluster command [<UNKNOWN>] failed on server.."
Please check my comment in line :-
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names. --------------- [root@cpu05 ~]# gluster peer status Number of Peers: 3
Hostname: cpu03.stack.com Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb State: Peer in Cluster (Connected)
Hostname: cpu04.stack.com Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 State: Peer in Cluster (Connected) Other names: 10.10.0.8
Hostname: cpu02.stack.com Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 State: Peer in Cluster (Connected) [root@cpu05 ~]# ---------------- 2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
I think the service itself doing the same job....
[root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service [Unit] Description=GlusterFS, a clustered file-system server After=network.target rpcbind.service Before=network-online.target
[Service] Type=forking PIDFile=/var/run/glusterd.pid LimitNOFILE=65536 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid KillMode=process
[Install] WantedBy=multi-user.target [root@cpu05 ~]# --------------------
gluster logs :-
[2014-11-24 09:22:22.147471] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.1 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: Maximum allowed open file descriptors set to 65536 [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: Using /var/lib/glusterd as working directory [2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] 0-rdma.management: Failed to initialize IB Device [2014-11-24 09:22:22.155285] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed [2014-11-24 09:22:22.155354] W [rpcsvc.c:1524:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed [2014-11-24 09:22:22.156290] I [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [2014-11-24 09:22:22.161318] I [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600 [2014-11-24 09:22:22.821800] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2014-11-24 09:22:22.825810] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2014-11-24 09:22:22.828705] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2014-11-24 09:22:22.828771] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2014-11-24 09:22:22.832670] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2014-11-24 09:22:22.835919] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2014-11-24 09:22:22.840209] E [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again [2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed [2014-11-24 09:22:22.840264] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed [2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
Thanks, Punit
On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> wrote:
Based on the logs I can guess that glusterd is being started before the network has come up and that the addresses given to bricks do not directly match the addresses used in during peer probe.
The gluster_after_reboot log has the line "[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore".
Brick resolution fails when glusterd cannot match the address for the brick, with one of the peers. Brick resolution happens in two phases, 1. We first try to identify the peer by performing string comparisions with the brick address and the peer addresses (The peer names will be the names/addresses that were given when the peer was probed). 2. If we don't find a match from step 1, we will then resolve all the brick address and the peer addresses into addrinfo structs, and then compare these structs to find a match. This process should generally find a match if available. This will fail only if the network is not up yet as we cannot resolve addresses.
The above steps are applicable only to glusterfs versions >=3.6. They were introduced to reduce problems with peer identification, like the one you encountered
Since both of the steps failed to find a match in one run, but succeeded later, we can come to the conclusion that, a) the bricks don't have the exact same string used in peer probe for their addresses as step 1 failed, and b) the network was not up in the initial run, as step 2 failed during the initial run, but passed in the second run.
Please let me know if my conclusion is correct.
If it is, you can solve your problem in two ways. 1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names. 2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
If you have already followed these steps and yet still hit the problem, then please provide more information (setup, logs, etc.). It could be much different problem that you are facing.
~kaushal
Is there any one can help on this ??
Thanks, punit
On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
My Glusterfs version is :- glusterfs-3.6.1-1.el7
On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <
kmayilsa@redhat.com>
wrote:
[+Gluster-users@gluster.org]
"Initialization of volume 'management' failed, review your volfile again", glusterd throws this error when the service is started
automatically
after the reboot. But the service is successfully started later manually by the user.
can somebody from gluster-users please help on this?
glusterfs version: 3.5.1
Thanks, Kanagaraj
----- Original Message -----
From: "Punit Dambiwal" <hypunit@gmail.com> To: "Kanagaraj" <kmayilsa@redhat.com> Cc: users@ovirt.org Sent: Tuesday, November 25, 2014 7:24:45 PM Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...
Hi Kanagraj,
Please check the attached log files....i didn't find any thing special....
On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
> Do you see any errors in > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when > the > service is trying to start automatically after the reboot? > > Thanks, > Kanagaraj > > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: > > Hi Kanagaraj, > > Yes...once i will start the gluster service and then vdsmd ...the > host > can connect to cluster...but the question is why it's not started > even it > has chkconfig enabled... > > I have tested it in two host cluster environment...(Centos 6.6 and > centos 7.0) on both hypervisior cluster..it's failed to reconnect in > to > cluster after reboot.... > > In both the environment glusterd enabled for next boot....but it's > failed with the same error....seems it's bug in either gluster or > Ovirt ?? > > Please help me to find the workaround here if can not resolve > it...as > without this the Host machine can not connect after reboot....that > means > engine will consider it as down and every time need to manually start > the > gluster service and vdsmd... ?? > > Thanks, > Punit > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> > wrote: > >> From vdsm.log "error: Connection failed. Please check if gluster >> daemon >> is operational." >> >> Starting glusterd service should fix this issue. 'service glusterd >> start' >> But i am wondering why the glusterd was not started automatically >> after >> the reboot. >> >> Thanks, >> Kanagaraj >> >> >> >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >> >> Hi Kanagaraj, >> >> Please find the attached VDSM logs :- >> >> ---------------- >> Thread-13::DEBUG::2014-11-24 >> >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> Owner.cancelAll requests {} >> Thread-13::DEBUG::2014-11-24 >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False >> Thread-13::DEBUG::2014-11-24 >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init >> -> >> state preparing >> Thread-13::INFO::2014-11-24 >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and
>> repoStats(options=None) >> Thread-13::INFO::2014-11-24 >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and
>> repoStats, Return response: {} >> Thread-13::DEBUG::2014-11-24 >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >> Thread-13::DEBUG::2014-11-24 >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state >> preparing >> -> >> state finished >> Thread-13::DEBUG::2014-11-24 >> >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >> Owner.releaseAll requests {} resources {} >> Thread-13::DEBUG::2014-11-24 >> >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> Owner.cancelAll requests {} >> Thread-13::DEBUG::2014-11-24 >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client >> [10.10.10.2]::call >> getCapabilities with () {} >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,553::utils::738::root::(execCmd) >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,560::utils::758::root::(execCmd) >> SUCCESS: <err> = ''; <rc> = 0 >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package >> ('gluster-swift',) not found >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package >> ('gluster-swift-object',) not found >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package >> ('gluster-swift-plugin',) not found >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package >> ('gluster-swift-account',) not found >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package >> ('gluster-swift-proxy',) not found >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package >> ('gluster-swift-doc',) not found >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package >> ('gluster-swift-container',) not found >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package >> ('glusterfs-geo-replication',) not found >> Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= >> 0.10.2-31 >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return >> getCapabilities >> with {'status': {'message': 'Done', 'code': 0}, 'info': >> {'HBAInventory': >> {'iSCSI': [{'InitiatorName': >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >> 'FC': >> []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', >> 'buildtime': >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': >> '1.el6', >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L, >> 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', >> 'buildtime': >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': >> '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, >> 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, >> 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', >> 'buildtime': >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': 
{'release': '1.el6', >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': >> '2.el6', >> 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': >> 'Intel(R) >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', >> 'hooks': >> {}, >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', >> 'bridged': >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', >> 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': [' >> 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': >> u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, >> 'Internal': >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', >> 'HOTPLUG': >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', >> 'BOOTPROTO': >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', >> 'ONBOOT': >> 'no'}, 'bridged': True, 'ipv6addrs': >> ['fe80::210:18ff:fecd:daac/64'], >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': >> ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', >> 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >> 'mtu': >> '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', >> 'ipv4addrs': [' >> 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': u'bond1', >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': >> 'VMNetwork', >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': >> '1500', >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': >> 'off', >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': >> True, >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', >> 'bootproto4': >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': >> {'Internal': >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': >> '9000', >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': >> 'off', >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, >> 'ipv6addrs': >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', >> 'stp': >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': >> {'topology_change_detected': '0', 'multicast_last_member_count': >> '2', >> 'hash_elasticity': '4', 'multicast_query_response_interval': '999', >> 'multicast_snooping': '1', 'multicast_startup_query_interval': >> '3124', >> 'hello_timer': '31', 'multicast_querier_interval': '25496', >> 'max_age': >> '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': >> '8000.001018cddaac', 'priority': '32768', >> 'multicast_membership_interval': >> '25996', 'root_path_cost': '0', 'root_port': '0', >> 'multicast_querier': >> '0', >> 'multicast_startup_query_count': '2', 'hello_time': '199', >> 'topology_change': '0', 'bridge_id': '8000.001018cddaac', >> 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': >> '31', >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >> 'multicast_query_interval': '12498', >> 'multicast_last_member_interval': >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 
>> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': >> 'Bridge', >> 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >> 'mtu': >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': >> '::', >> 'gateway': '', 'opts': {'topology_change_detected': '0', >> 'multicast_last_member_count': '2', 'hash_elasticity': '4', >> 'multicast_query_response_interval': '999', 'multicast_snooping': >> '1', >> 'multicast_startup_query_interval': '3124', 'hello_timer': '131', >> 'multicast_querier_interval': '25496', 'max_age': '1999', >> 'hash_max': >> '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': >> '32768', 'multicast_membership_interval': '25996', 'root_path_cost': >> '0', >> 'root_port': '0', 'multicast_querier': '0', >> 'multicast_startup_query_count': '2', 'hello_time': '199', >> 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', >> 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': >> '31', >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >> 'multicast_query_interval': '12498', >> 'multicast_last_member_interval': >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': >> ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', >> 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', >> 'NM_CONTROLLED': >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': >> 'bond1', >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', >> 'speed': >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', >> 'cfg': >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': >> 'no'}, >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], >> 'hwaddr': >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', >> 'NM_CONTROLLED': >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': >> 'bond0', >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', >> 'speed': >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', >> 'cfg': >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': >> 'yes'}, >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], >> 'hwaddr': >> '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], >> 'cpuFlags': >> >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', >> 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', >> '3.3', >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', >> 'bondings': >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': 
'1500', 'netmask': '', >> 'slaves': >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', >> 'BONDING_OPTS': >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, >> 'ipv6addrs': >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', >> 'eth1'], >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': >> '10.10.10.6', >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', >> 'MTU': >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', >> 'BOOTPROTO': >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', >> 'ONBOOT': >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', >> 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' >> <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', 'slaves': >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond2': >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, >> 'mtu': >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': >> '00:00:00:00:00:00'}}, >> 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': >> '2667.000', >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, 9, >> 10, 11, >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': [0, >> 1, 2, >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', >> 'vlans': >> {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': >> {'DEFROUTE': >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', >> 'GATEWAY': >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', 'ONBOOT': >> 'yes'}, >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, 'mtu': >> '1500', >> 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', >> 'addr': >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': 'no', >> 'MTU': >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': >> 'no'}, >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, 'mtu': >> '1500', >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', >> 'addr': >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': 'no', >> 'MTU': >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': >> 'no'}, >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, 'mtu': >> '9000', >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', 'kvmEnabled': >> 'true', >> 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', >> u'rhel5.4.0'], >> 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', >> 'name': >> 'RHEL'}, 'lastClient': '10.10.10.2'}} >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client >> [10.10.10.2]::call >> getHardwareInfo with () {} >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return >> getHardwareInfo >> with {'status': {'message': 'Done', 'code': 0}, 'info': >> {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', >> 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': >> '44454c4c-4c00-1057-8053-b7c04f504e31', 
'systemManufacturer': >> 'Dell'}} >> Thread-13::DEBUG::2014-11-24 >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client >> [10.10.10.2]::call >> hostsList with () {} flowID [222e8036] >> Thread-13::ERROR::2014-11-24 >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception >> occured >> Traceback (most recent call last): >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper >> res = f(*args, **kwargs) >> File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper >> rv = func(*args, **kwargs) >> File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList >> return {'hosts': self.svdsmProxy.glusterPeerStatus()} >> File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ >> return callMethod() >> File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> >> **kwargs) >> File "<string>", line 2, in glusterPeerStatus >> File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, >> in >> _callmethod >> raise convert_to_error(kind, result) >> GlusterCmdExecFailedException: Command execution failed >> error: Connection failed. Please check if gluster daemon is >> operational. >> return code: 1 >> Thread-13::DEBUG::2014-11-24 >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init >> -> >> state preparing >> Thread-13::INFO::2014-11-24 >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and
>> protect: repoStats(options=None) >> Thread-13::INFO::2014-11-24 >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect:
>> repoStats, Return response: {} >> Thread-13::DEBUG::2014-11-24 >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} >> Thread-13::DEBUG::2014-11-24 >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state >> preparing >> -> >> state finished >> Thread-13::DEBUG::2014-11-24 >> >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >> Owner.releaseAll requests {} resources {} >> Thread-13::DEBUG::2014-11-24 >> >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> Owner.cancelAll requests {} >> Thread-13::DEBUG::2014-11-24 >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False >> ------------------------------- >> >> [root@compute4 ~]# service glusterd status >> glusterd is stopped >> [root@compute4 ~]# chkconfig --list | grep glusterd >> glusterd 0:off 1:off 2:on 3:on 4:on 5:on >> 6:off >> [root@compute4 ~]# >> >> Thanks, >> Punit >> >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> >> wrote: >> >>> Can you send the corresponding error in vdsm.log from the host? >>> >>> Also check if glusterd service is running. >>> >>> Thanks, >>> Kanagaraj >>> >>> >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote: >>> >>> Hi, >>> >>> After reboot my Hypervisior host can not activate again in the >>> cluster >>> and failed with the following error :- >>> >>> Gluster command [<UNKNOWN>] failed on server... >>> >>> Engine logs :- >>> >>> 2014-11-24 18:05:28,397 INFO >>> >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>> (DefaultQuartzScheduler_Worker-64) START, >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90 >>> 2014-11-24 18:05:30,609 INFO >>> >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>> (DefaultQuartzScheduler_Worker-64) FINISH, >>> GlusterVolumesListVDSCommand, >>> return: >>> >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0 }, >>> log id: 5f251c90 >>> 2014-11-24 18:05:33,768 INFO >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>> (ajp--127.0.0.1-8702-8) >>> [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= key: >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS >>> , sharedLocks= ] >>> 2014-11-24 18:05:33,795 INFO >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: >>> ActivateVdsCommand internal: false. 
Entities affected : ID: >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group >>> MANIPULATE_HOST >>> with role type ADMIN >>> 2014-11-24 18:05:33,796 INFO >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring >>> lock in >>> order to prevent monitoring for host Compute5 from data-center >>> SV_WTC >>> 2014-11-24 18:05:33,797 INFO >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, from >>> now a >>> monitoring of host will be skipped for host Compute5 from >>> data-center >>> SV_WTC >>> 2014-11-24 18:05:33,817 INFO >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] START, >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, >>> nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: >>> 1cbc7311 >>> 2014-11-24 18:05:33,820 INFO >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, >>> SetVdsStatusVDSCommand, log id: 1cbc7311 >>> 2014-11-24 18:05:34,086 INFO >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>> (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock >>> released. >>> Monitoring can run now for host Compute5 from data-center SV_WTC >>> 2014-11-24 18:05:34,088 INFO >>> >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, Job >>> ID: >>> 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom >>> Event ID: >>> -1, Message: Host Compute5 was activated by admin. >>> 2014-11-24 18:05:34,090 INFO >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>> (org.ovirt.thread.pool-8-thread-45) Lock freed to object EngineLock >>> [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: >>> VDS >>> , sharedLocks= ] >>> 2014-11-24 18:05:35,792 INFO >>> >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>> (DefaultQuartzScheduler_Worker-55) [3706e836] START, >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832 >>> 2014-11-24 18:05:37,064 INFO >>> >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >>> (DefaultQuartzScheduler_Worker-69) START, >>> GetHardwareInfoVDSCommand(HostName = Compute5, HostId = >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>> vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log id: >>> 6d560cc2 >>> 2014-11-24 18:05:37,074 INFO >>> >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >>> (DefaultQuartzScheduler_Worker-69) FINISH, >>> GetHardwareInfoVDSCommand, log >>> id: 6d560cc2 >>> 2014-11-24 18:05:37,093 WARN >>> [org.ovirt.engine.core.vdsbroker.VdsManager] >>> (DefaultQuartzScheduler_Worker-69) Host Compute5 is running with >>> disabled >>> SELinux. >>> 2014-11-24 18:05:37,127 INFO >>> >>> [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: >>> HandleVdsCpuFlagsOrClusterChangedCommand internal: true. 
Entities >>> affected >>> : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >>> 2014-11-24 18:05:37,147 INFO >>> >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, >>> GlusterServersListVDSCommand(HostName = Compute5, HostId = >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87 >>> 2014-11-24 18:05:37,164 INFO >>> >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, >>> GlusterServersListVDSCommand, log id: 4faed87 >>> 2014-11-24 18:05:37,189 INFO >>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: >>> SetNonOperationalVdsCommand internal: true. Entities affected : >>> ID: >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >>> 2014-11-24 18:05:37,206 INFO >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, >>> nonOperationalReason=GLUSTER_COMMAND_FAILED, >>> stopSpmFailureLogged=false), >>> log id: fed5617 >>> 2014-11-24 18:05:37,209 INFO >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, >>> SetVdsStatusVDSCommand, log id: fed5617 >>> 2014-11-24 18:05:37,223 ERROR >>> >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: >>> 4a84c4e5, >>> Job >>> ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, Custom >>> Event >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed on server >>> Compute5. >>> 2014-11-24 18:05:37,243 INFO >>> >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: null, >>> Call >>> Stack: null, Custom Event ID: -1, Message: Status of host Compute5 >>> was >>> set >>> to NonOperational. >>> 2014-11-24 18:05:37,272 INFO >>> [org.ovirt.engine.core.bll.HandleVdsVersionCommand] >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: >>> HandleVdsVersionCommand internal: true. Entities affected : ID: >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >>> 2014-11-24 18:05:37,274 INFO >>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in >>> NonOperational status for reason GLUSTER_COMMAND_FAILED. >>> SetNonOperationalVds command is skipped. >>> 2014-11-24 18:05:38,065 INFO >>> >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>> (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, >>> GlusterVolumesListVDSCommand, return: >>> >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1 }, >>> log id: 48a0c832 >>> 2014-11-24 18:05:43,243 INFO >>> >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>> (DefaultQuartzScheduler_Worker-35) START, >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc >>> ^C >>> [root@ccr01 ~]# >>> >>> Thanks, >>> Punit >>> >>> >>> _______________________________________________ >>> Users mailing >>> listUsers@ovirt.orghttp://
lists.ovirt.org/mailman/listinfo/users
_______________________________________________ Gluster-users mailing list Gluster-users@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-users

Hey Punit, Could you start Glusterd in debug mode and provide the logs here? To start it in debug mode, append '-LDEBUG' to the ExecStart line in the service file. ~kaushal On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
Can Any body help me on this ??
On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
Thanks for the detailed reply....let me explain my setup first :-
1. Ovirt Engine 2. 4* host as well as storage machine (Host and gluster combined) 3. Every host has 24 bricks...
Now whenever the host machine reboots...it can come up but can not join the cluster again and throws the following error "Gluster command [<UNKNOWN>] failed on server.."
Please check my comment in line :-
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names. --------------- [root@cpu05 ~]# gluster peer status Number of Peers: 3
Hostname: cpu03.stack.com Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb State: Peer in Cluster (Connected)
Hostname: cpu04.stack.com Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 State: Peer in Cluster (Connected) Other names: 10.10.0.8
Hostname: cpu02.stack.com Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 State: Peer in Cluster (Connected) [root@cpu05 ~]# ---------------- 2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
I think the service itself doing the same job....
[root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service [Unit] Description=GlusterFS, a clustered file-system server After=network.target rpcbind.service Before=network-online.target
[Service] Type=forking PIDFile=/var/run/glusterd.pid LimitNOFILE=65536 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid KillMode=process
[Install] WantedBy=multi-user.target [root@cpu05 ~]# --------------------
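A note on the unit above: After=network.target together with Before=network-online.target does not by itself make glusterd wait until addresses are configured, which is what suggestion 2 asks for. A minimal sketch of that suggestion on a systemd host (the drop-in path below is an assumed example, adjust as needed) would be:

# /etc/systemd/system/glusterd.service.d/wait-for-network.conf  (assumed path)
[Unit]
Wants=network-online.target
After=network-online.target

# reload units and make sure the distribution's wait-online service is enabled
systemctl daemon-reload
systemctl enable NetworkManager-wait-online.service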
gluster logs :-
[2014-11-24 09:22:22.147471] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.1 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: Maximum allowed open file descriptors set to 65536 [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: Using /var/lib/glusterd as working directory [2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] 0-rdma.management: Failed to initialize IB Device [2014-11-24 09:22:22.155285] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed [2014-11-24 09:22:22.155354] W [rpcsvc.c:1524:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed [2014-11-24 09:22:22.156290] I [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [2014-11-24 09:22:22.161318] I [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600 [2014-11-24 09:22:22.821800] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2014-11-24 09:22:22.825810] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2014-11-24 09:22:22.828705] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2014-11-24 09:22:22.828771] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2014-11-24 09:22:22.832670] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2014-11-24 09:22:22.835919] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2014-11-24 09:22:22.840209] E [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again [2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed [2014-11-24 09:22:22.840264] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed [2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
Thanks, Punit
On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> wrote:
Based on the logs I can guess that glusterd is being started before the network has come up and that the addresses given to bricks do not directly match the addresses used during peer probe.
The gluster_after_reboot log has the line "[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore".
Brick resolution fails when glusterd cannot match the address for the brick with one of the peers. Brick resolution happens in two phases: 1. We first try to identify the peer by performing string comparisons with the brick address and the peer addresses (The peer names will be the names/addresses that were given when the peer was probed). 2. If we don't find a match from step 1, we will then resolve the brick address and the peer addresses into addrinfo structs, and then compare these structs to find a match. This process should generally find a match if available. This will fail only if the network is not up yet as we cannot resolve addresses.
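As a rough illustration only (a sketch, not glusterd's actual C implementation), the matching described above behaves roughly like this:

import socket

def resolve(addr):
    # Resolve a peer/brick address into a set of IPs.
    # Raises if DNS/the network is not up yet.
    return {info[4][0] for info in socket.getaddrinfo(addr, None)}

def find_peer_for_brick(brick_host, peers):
    # peers: {peer_id: [names/addresses recorded when the peer was probed]}
    # Phase 1: plain string comparison against the recorded peer names.
    for peer_id, names in peers.items():
        if brick_host in names:
            return peer_id
    # Phase 2: resolve everything and compare the resulting addresses.
    brick_ips = resolve(brick_host)
    for peer_id, names in peers.items():
        if any(brick_ips & resolve(name) for name in names):
            return peer_id
    # No match -> the "resolve brick failed in restore" error in the log above.
    return None

If phase 1 fails because the brick string differs from the probed name, and phase 2 fails because nothing can be resolved while the network is still down, the restore aborts exactly as in the boot-time log.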
The above steps are applicable only to glusterfs versions >=3.6. They were introduced to reduce problems with peer identification, like the one you encountered
Since both of the steps failed to find a match in one run, but succeeded later, we can come to the conclusion that, a) the bricks don't have the exact same string used in peer probe for their addresses as step 1 failed, and b) the network was not up in the initial run, as step 2 failed during the initial run, but passed in the second run.
Please let me know if my conclusion is correct.
If it is, you can solve your problem in two ways. 1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names. 2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
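To make suggestion 1 concrete (the volume name and brick paths below are made up for illustration; the host names are the ones from your peer status), the point is simply to use identical strings in both places, e.g. run from cpu05:

gluster peer probe cpu02.stack.com
gluster peer probe cpu03.stack.com
gluster peer probe cpu04.stack.com
# later, reuse exactly the same names for the bricks
gluster volume create data replica 2 \
    cpu02.stack.com:/bricks/b01 cpu03.stack.com:/bricks/b01 \
    cpu04.stack.com:/bricks/b01 cpu05.stack.com:/bricks/b01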
If you have already followed these steps and yet still hit the problem, then please provide more information (setup, logs, etc.). It could be much different problem that you are facing.
~kaushal
On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Is there any one can help on this ??
Thanks, punit
On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
My Glusterfs version is :- glusterfs-3.6.1-1.el7
On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:
[+Gluster-users@gluster.org]
"Initialization of volume 'management' failed, review your volfile again", glusterd throws this error when the service is started automatically after the reboot. But the service is successfully started later manually by the user.
can somebody from gluster-users please help on this?
glusterfs version: 3.5.1
Thanks, Kanagaraj
----- Original Message ----- > From: "Punit Dambiwal" <hypunit@gmail.com> > To: "Kanagaraj" <kmayilsa@redhat.com> > Cc: users@ovirt.org > Sent: Tuesday, November 25, 2014 7:24:45 PM > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on > server... > > Hi Kanagraj, > > Please check the attached log files....i didn't find any thing > special.... > > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> > wrote: > > > Do you see any errors in > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log > > when > > the > > service is trying to start automatically after the reboot? > > > > Thanks, > > Kanagaraj > > > > > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: > > > > Hi Kanagaraj, > > > > Yes...once i will start the gluster service and then vdsmd > > ...the > > host > > can connect to cluster...but the question is why it's not started > > even it > > has chkconfig enabled... > > > > I have tested it in two host cluster environment...(Centos 6.6 > > and > > centos 7.0) on both hypervisior cluster..it's failed to reconnect > > in > > to > > cluster after reboot.... > > > > In both the environment glusterd enabled for next boot....but > > it's > > failed with the same error....seems it's bug in either gluster or > > Ovirt ?? > > > > Please help me to find the workaround here if can not resolve > > it...as > > without this the Host machine can not connect after > > reboot....that > > means > > engine will consider it as down and every time need to manually > > start > > the > > gluster service and vdsmd... ?? > > > > Thanks, > > Punit > > > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> > > wrote: > > > >> From vdsm.log "error: Connection failed. Please check if > >> gluster > >> daemon > >> is operational." > >> > >> Starting glusterd service should fix this issue. 'service > >> glusterd > >> start' > >> But i am wondering why the glusterd was not started > >> automatically > >> after > >> the reboot. 
> >> > >> Thanks, > >> Kanagaraj > >> > >> > >> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: > >> > >> Hi Kanagaraj, > >> > >> Please find the attached VDSM logs :- > >> > >> ---------------- > >> Thread-13::DEBUG::2014-11-24 > >> > >> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > >> Owner.cancelAll requests {} > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting > >> False > >> Thread-13::DEBUG::2014-11-24 > >> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state > >> init > >> -> > >> state preparing > >> Thread-13::INFO::2014-11-24 > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and > >> protect: > >> repoStats(options=None) > >> Thread-13::INFO::2014-11-24 > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and > >> protect: > >> repoStats, Return response: {} > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} > >> Thread-13::DEBUG::2014-11-24 > >> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state > >> preparing > >> -> > >> state finished > >> Thread-13::DEBUG::2014-11-24 > >> > >> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > >> Owner.releaseAll requests {} resources {} > >> Thread-13::DEBUG::2014-11-24 > >> > >> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > >> Owner.cancelAll requests {} > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting > >> False > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client > >> [10.10.10.2]::call > >> getCapabilities with () {} > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,553::utils::738::root::(execCmd) > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,560::utils::758::root::(execCmd) > >> SUCCESS: <err> = ''; <rc> = 0 > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package > >> ('gluster-swift',) not found > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package > >> ('gluster-swift-object',) not found > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package > >> ('gluster-swift-plugin',) not found > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package > >> ('gluster-swift-account',) not found > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package > >> ('gluster-swift-proxy',) not found > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package > >> ('gluster-swift-doc',) not found > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package > >> ('gluster-swift-container',) not found > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package > >> ('glusterfs-geo-replication',) not found > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,600::caps::646::root::(get) > >> 
VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required > >> >= > >> 0.10.2-31 > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return > >> getCapabilities > >> with {'status': {'message': 'Done', 'code': 0}, 'info': > >> {'HBAInventory': > >> {'iSCSI': [{'InitiatorName': > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], > >> 'FC': > >> []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', > >> 'buildtime': > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': > >> {'release': > >> '1.el6', > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': > >> '3.5.1'}, > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': > >> 1402324637L, > >> 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', > >> 'buildtime': > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': > >> '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': > >> '0.12.1.2'}, > >> 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': > >> 1402435700L, > >> 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', > >> 'buildtime': > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': > >> '1.el6', > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': > >> {'release': > >> '2.el6', > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, > >> 'glusterfs-server': > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': > >> '3.5.1'}}, > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': > >> 'Intel(R) > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', > >> 'hooks': > >> {}, > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], > >> 'networks': > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', > >> 'bridged': > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': > >> '1500', > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': > >> [' > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, > >> 'Internal': > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', > >> 'HOTPLUG': > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', > >> 'BOOTPROTO': > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', > >> 'ONBOOT': > >> 'no'}, 'bridged': True, 'ipv6addrs': > >> ['fe80::210:18ff:fecd:daac/64'], > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': > >> 'off', > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': > >> ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': > >> '10.10.10.6', > >> 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], > >> 'mtu': > >> '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', > >> 'ipv4addrs': [' > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': > >> u'bond1', > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': > >> 'VMNetwork', > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': > >> '1500', > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': > >> 'off', > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, > >> 'bridged': > >> True, > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', > >> 'bootproto4': > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': > >> '1500', > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': > >> {'Internal': 
> >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': > >> '9000', > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': > >> 'off', > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, > >> 'ipv6addrs': > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', > >> 'stp': > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', > >> 'opts': > >> {'topology_change_detected': '0', 'multicast_last_member_count': > >> '2', > >> 'hash_elasticity': '4', 'multicast_query_response_interval': > >> '999', > >> 'multicast_snooping': '1', 'multicast_startup_query_interval': > >> '3124', > >> 'hello_timer': '31', 'multicast_querier_interval': '25496', > >> 'max_age': > >> '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': > >> '8000.001018cddaac', 'priority': '32768', > >> 'multicast_membership_interval': > >> '25996', 'root_path_cost': '0', 'root_port': '0', > >> 'multicast_querier': > >> '0', > >> 'multicast_startup_query_count': '2', 'hello_time': '199', > >> 'topology_change': '0', 'bridge_id': '8000.001018cddaac', > >> 'topology_change_timer': '0', 'ageing_time': '29995', > >> 'gc_timer': > >> '31', > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > >> 'multicast_query_interval': '12498', > >> 'multicast_last_member_interval': > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': > >> 'no', > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': > >> 'no', > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', > >> 'TYPE': > >> 'Bridge', > >> 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], > >> 'mtu': > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], > >> 'ipv6gateway': > >> '::', > >> 'gateway': '', 'opts': {'topology_change_detected': '0', > >> 'multicast_last_member_count': '2', 'hash_elasticity': '4', > >> 'multicast_query_response_interval': '999', > >> 'multicast_snooping': > >> '1', > >> 'multicast_startup_query_interval': '3124', 'hello_timer': > >> '131', > >> 'multicast_querier_interval': '25496', 'max_age': '1999', > >> 'hash_max': > >> '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', > >> 'priority': > >> '32768', 'multicast_membership_interval': '25996', > >> 'root_path_cost': > >> '0', > >> 'root_port': '0', 'multicast_querier': '0', > >> 'multicast_startup_query_count': '2', 'hello_time': '199', > >> 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', > >> 'topology_change_timer': '0', 'ageing_time': '29995', > >> 'gc_timer': > >> '31', > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > >> 'multicast_query_interval': '12498', > >> 'multicast_last_member_interval': > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': > >> ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', > >> 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', > >> 'NM_CONTROLLED': > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': > >> 'bond1', > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': > >> '9000', > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', > >> 'speed': > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', > >> 'cfg': > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', > >> 'ONBOOT': > >> 'no'}, > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 
'ipv4addrs': [], > >> 'hwaddr': > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', > >> 'NM_CONTROLLED': > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': > >> 'bond0', > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': > >> '1500', > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', > >> 'speed': > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', > >> 'cfg': > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', > >> 'ONBOOT': > >> 'yes'}, > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], > >> 'hwaddr': > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], > >> 'cpuFlags': > >> > >> > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', > >> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', > >> 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', > >> '3.2', > >> '3.3', > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', > >> 'bondings': > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', > >> 'slaves': > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', > >> 'BONDING_OPTS': > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, > >> 'ipv6addrs': > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': > >> ['eth0', > >> 'eth1'], > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': > >> '10.10.10.6', > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': > >> 'no', > >> 'MTU': > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', > >> 'BOOTPROTO': > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', > >> 'ONBOOT': > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': > >> '9000', > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' > >> <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', > >> 'slaves': > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, > >> 'bond2': > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': > >> [], > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, > >> 'mtu': > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': > >> '00:00:00:00:00:00'}}, > >> 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': > >> '2667.000', > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, > >> 9, > >> 10, 11, > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': > >> [0, > >> 1, 2, > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', > >> 'vlans': > >> {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': > >> {'DEFROUTE': > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', > >> 'GATEWAY': > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': 
'255.255.255.0', > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', > >> 'ONBOOT': > >> 'yes'}, > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, > >> 'mtu': > >> '1500', > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', > >> 'addr': > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': > >> 'no', > >> 'MTU': > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': > >> 'no'}, > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, > >> 'mtu': > >> '1500', > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', > >> 'addr': > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': > >> 'no', > >> 'MTU': > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': > >> 'no'}, > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, > >> 'mtu': > >> '9000', > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', > >> 'kvmEnabled': > >> 'true', > >> 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', > >> u'rhel5.4.0'], > >> 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': > >> '6', > >> 'name': > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client > >> [10.10.10.2]::call > >> getHardwareInfo with () {} > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return > >> getHardwareInfo > >> with {'status': {'message': 'Done', 'code': 0}, 'info': > >> {'systemProductName': 'CS24-TY', 'systemSerialNumber': > >> '7LWSPN1', > >> 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': > >> '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': > >> 'Dell'}} > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client > >> [10.10.10.2]::call > >> hostsList with () {} flowID [222e8036] > >> Thread-13::ERROR::2014-11-24 > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception > >> occured > >> Traceback (most recent call last): > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in > >> wrapper > >> res = f(*args, **kwargs) > >> File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper > >> rv = func(*args, **kwargs) > >> File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList > >> return {'hosts': self.svdsmProxy.glusterPeerStatus()} > >> File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ > >> return callMethod() > >> File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> > >> **kwargs) > >> File "<string>", line 2, in glusterPeerStatus > >> File "/usr/lib64/python2.6/multiprocessing/managers.py", line > >> 740, > >> in > >> _callmethod > >> raise convert_to_error(kind, result) > >> GlusterCmdExecFailedException: Command execution failed > >> error: Connection failed. Please check if gluster daemon is > >> operational. 
> >> return code: 1 > >> Thread-13::DEBUG::2014-11-24 > >> > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state > >> init > >> -> > >> state preparing > >> Thread-13::INFO::2014-11-24 > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and > >> protect: > >> repoStats(options=None) > >> Thread-13::INFO::2014-11-24 > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and > >> protect: > >> repoStats, Return response: {} > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} > >> Thread-13::DEBUG::2014-11-24 > >> > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state > >> preparing > >> -> > >> state finished > >> Thread-13::DEBUG::2014-11-24 > >> > >> > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > >> Owner.releaseAll requests {} resources {} > >> Thread-13::DEBUG::2014-11-24 > >> > >> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > >> Owner.cancelAll requests {} > >> Thread-13::DEBUG::2014-11-24 > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting > >> False > >> ------------------------------- > >> > >> [root@compute4 ~]# service glusterd status > >> glusterd is stopped > >> [root@compute4 ~]# chkconfig --list | grep glusterd > >> glusterd 0:off 1:off 2:on 3:on 4:on 5:on > >> 6:off > >> [root@compute4 ~]# > >> > >> Thanks, > >> Punit > >> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> > >> wrote: > >> > >>> Can you send the corresponding error in vdsm.log from the > >>> host? > >>> > >>> Also check if glusterd service is running. > >>> > >>> Thanks, > >>> Kanagaraj > >>> > >>> > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote: > >>> > >>> Hi, > >>> > >>> After reboot my Hypervisior host can not activate again in the > >>> cluster > >>> and failed with the following error :- > >>> > >>> Gluster command [<UNKNOWN>] failed on server... > >>> > >>> Engine logs :- > >>> > >>> 2014-11-24 18:05:28,397 INFO > >>> > >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-64) START, > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90 > >>> 2014-11-24 18:05:30,609 INFO > >>> > >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-64) FINISH, > >>> GlusterVolumesListVDSCommand, > >>> return: > >>> > >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0}, > >>> log id: 5f251c90 > >>> 2014-11-24 18:05:33,768 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (ajp--127.0.0.1-8702-8) > >>> [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= > >>> key: > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS > >>> , sharedLocks= ] > >>> 2014-11-24 18:05:33,795 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: > >>> ActivateVdsCommand internal: false. 
Entities affected : ID: > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group > >>> MANIPULATE_HOST > >>> with role type ADMIN > >>> 2014-11-24 18:05:33,796 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring > >>> lock in > >>> order to prevent monitoring for host Compute5 from data-center > >>> SV_WTC > >>> 2014-11-24 18:05:33,797 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, > >>> from > >>> now a > >>> monitoring of host will be skipped for host Compute5 from > >>> data-center > >>> SV_WTC > >>> 2014-11-24 18:05:33,817 INFO > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] START, > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, > >>> nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: > >>> 1cbc7311 > >>> 2014-11-24 18:05:33,820 INFO > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, > >>> SetVdsStatusVDSCommand, log id: 1cbc7311 > >>> 2014-11-24 18:05:34,086 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock > >>> released. > >>> Monitoring can run now for host Compute5 from data-center > >>> SV_WTC > >>> 2014-11-24 18:05:34,088 INFO > >>> > >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >>> (org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, > >>> Job > >>> ID: > >>> 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom > >>> Event ID: > >>> -1, Message: Host Compute5 was activated by admin. > >>> 2014-11-24 18:05:34,090 INFO > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> (org.ovirt.thread.pool-8-thread-45) Lock freed to object > >>> EngineLock > >>> [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a > >>> value: > >>> VDS > >>> , sharedLocks= ] > >>> 2014-11-24 18:05:35,792 INFO > >>> > >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] START, > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832 > >>> 2014-11-24 18:05:37,064 INFO > >>> > >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) START, > >>> GetHardwareInfoVDSCommand(HostName = Compute5, HostId = > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, > >>> vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log > >>> id: > >>> 6d560cc2 > >>> 2014-11-24 18:05:37,074 INFO > >>> > >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) FINISH, > >>> GetHardwareInfoVDSCommand, log > >>> id: 6d560cc2 > >>> 2014-11-24 18:05:37,093 WARN > >>> [org.ovirt.engine.core.vdsbroker.VdsManager] > >>> (DefaultQuartzScheduler_Worker-69) Host Compute5 is running > >>> with > >>> disabled > >>> SELinux. > >>> 2014-11-24 18:05:37,127 INFO > >>> > >>> > >>> [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: > >>> HandleVdsCpuFlagsOrClusterChangedCommand internal: true. 
> >>> Entities > >>> affected > >>> : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > >>> 2014-11-24 18:05:37,147 INFO > >>> > >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, > >>> GlusterServersListVDSCommand(HostName = Compute5, HostId = > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87 > >>> 2014-11-24 18:05:37,164 INFO > >>> > >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, > >>> GlusterServersListVDSCommand, log id: 4faed87 > >>> 2014-11-24 18:05:37,189 INFO > >>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: > >>> SetNonOperationalVdsCommand internal: true. Entities affected : > >>> ID: > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > >>> 2014-11-24 18:05:37,206 INFO > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, > >>> nonOperationalReason=GLUSTER_COMMAND_FAILED, > >>> stopSpmFailureLogged=false), > >>> log id: fed5617 > >>> 2014-11-24 18:05:37,209 INFO > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, > >>> SetVdsStatusVDSCommand, log id: fed5617 > >>> 2014-11-24 18:05:37,223 ERROR > >>> > >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: > >>> 4a84c4e5, > >>> Job > >>> ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, > >>> Custom > >>> Event > >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed on server > >>> Compute5. > >>> 2014-11-24 18:05:37,243 INFO > >>> > >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: > >>> null, > >>> Call > >>> Stack: null, Custom Event ID: -1, Message: Status of host > >>> Compute5 > >>> was > >>> set > >>> to NonOperational. > >>> 2014-11-24 18:05:37,272 INFO > >>> [org.ovirt.engine.core.bll.HandleVdsVersionCommand] > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: > >>> HandleVdsVersionCommand internal: true. Entities affected : > >>> ID: > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > >>> 2014-11-24 18:05:37,274 INFO > >>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in > >>> NonOperational status for reason GLUSTER_COMMAND_FAILED. > >>> SetNonOperationalVds command is skipped. 
> >>> 2014-11-24 18:05:38,065 INFO > >>> > >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, > >>> GlusterVolumesListVDSCommand, return: > >>> > >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1}, > >>> log id: 48a0c832 > >>> 2014-11-24 18:05:43,243 INFO > >>> > >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> (DefaultQuartzScheduler_Worker-35) START, > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc > >>> ^C > >>> [root@ccr01 ~]# > >>> > >>> Thanks, > >>> Punit > >>> > >>> > >>> _______________________________________________ > >>> Users mailing > >>> > >>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > >>> > >>> > >>> > >> > >> > > > > >
_______________________________________________ Gluster-users mailing list Gluster-users@gluster.org http://supercolony.gluster.org/mailman/listinfo/gluster-users

Hi Kaushal, Please find the logs here :- http://ur1.ca/iyhs5 and http://ur1.ca/iyhue Thanks, punit On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit, Could you start Glusterd in debug mode and provide the logs here? To start it in debug mode, append '-LDEBUG' to the ExecStart line in the service file.
~kaushal
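For reference, with the stock unit file quoted further down in this thread, the edited line would presumably look like this, followed by a reload and a restart of glusterd so the new log level takes effect:

ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid -LDEBUG

systemctl daemon-reload
systemctl restart glusterd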
Hi,
Can Any body help me on this ??
On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
Thanks for the detailed reply....let me explain my setup first :-
1. Ovirt Engine 2. 4* host as well as storage machine (Host and gluster combined) 3. Every host has 24 bricks...
Now whenever the host machine reboots...it can come up but can not join
the cluster again and throws the following error "Gluster command [<UNKNOWN>] failed on server.."
Please check my comment in line :-
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names. --------------- [root@cpu05 ~]# gluster peer status Number of Peers: 3
Hostname: cpu03.stack.com Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb State: Peer in Cluster (Connected)
Hostname: cpu04.stack.com Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 State: Peer in Cluster (Connected) Other names: 10.10.0.8
Hostname: cpu02.stack.com Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 State: Peer in Cluster (Connected) [root@cpu05 ~]# ---------------- 2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
I think the service itself doing the same job....
[root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service [Unit] Description=GlusterFS, a clustered file-system server After=network.target rpcbind.service Before=network-online.target
[Service] Type=forking PIDFile=/var/run/glusterd.pid LimitNOFILE=65536 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid KillMode=process
[Install] WantedBy=multi-user.target [root@cpu05 ~]# --------------------
gluster logs :-
[2014-11-24 09:22:22.147471] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.1 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: Maximum allowed open file descriptors set to 65536 [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: Using /var/lib/glusterd as working directory [2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device) [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] 0-rdma.management: Failed to initialize IB Device [2014-11-24 09:22:22.155285] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed [2014-11-24 09:22:22.155354] W [rpcsvc.c:1524:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed [2014-11-24 09:22:22.156290] I [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system [2014-11-24 09:22:22.161318] I [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600 [2014-11-24 09:22:22.821800] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2014-11-24 09:22:22.825810] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2014-11-24 09:22:22.828705] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0 [2014-11-24 09:22:22.828771] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2014-11-24 09:22:22.832670] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2014-11-24 09:22:22.835919] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600 [2014-11-24 09:22:22.840209] E [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again [2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed [2014-11-24 09:22:22.840264] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed [2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
Thanks, Punit
On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> wrote:
Based on the logs I can guess that glusterd is being started before the network has come up and that the addresses given to bricks do not directly match the addresses used during peer probe.
The gluster_after_reboot log has the line "[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore".
Brick resolution fails when glusterd cannot match the address for the brick with one of the peers. Brick resolution happens in two phases: 1. We first try to identify the peer by performing string comparisons with the brick address and the peer addresses (The peer names will be the names/addresses that were given when the peer was probed). 2. If we don't find a match from step 1, we will then resolve the brick address and the peer addresses into addrinfo structs, and then compare these structs to find a match. This process should generally find a match if available. This will fail only if the network is not up yet as we cannot resolve addresses.
The above steps are applicable only to glusterfs versions >=3.6. They were introduced to reduce problems with peer identification, like the one you encountered
Since both of the steps failed to find a match in one run, but succeeded later, we can come to the conclusion that, a) the bricks don't have the exact same string used in peer probe for their addresses as step 1 failed, and b) the network was not up in the initial run, as step 2 failed during the initial run, but passed in the second run.
Please let me know if my conclusion is correct.
If it is, you can solve your problem in two ways. 1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names. 2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
If you have already followed these steps and yet still hit the problem, then please provide more information (setup, logs, etc.). It could be much different problem that you are facing.
~kaushal
On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Is there any one can help on this ??
Thanks, punit
On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
My Glusterfs version is :- glusterfs-3.6.1-1.el7
On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote: > > [+Gluster-users@gluster.org] > > "Initialization of volume 'management' failed, review your volfile > again", glusterd throws this error when the service is started > automatically > after the reboot. But the service is successfully started later > manually by > the user. > > can somebody from gluster-users please help on this? > > glusterfs version: 3.5.1 > > Thanks, > Kanagaraj > > ----- Original Message ----- > > From: "Punit Dambiwal" <hypunit@gmail.com> > > To: "Kanagaraj" <kmayilsa@redhat.com> > > Cc: users@ovirt.org > > Sent: Tuesday, November 25, 2014 7:24:45 PM > > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on > > server... > > > > Hi Kanagraj, > > > > Please check the attached log files....i didn't find any thing > > special.... > > > > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com
> > wrote: > > > > > Do you see any errors in > > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log > > > when > > > the > > > service is trying to start automatically after the reboot? > > > > > > Thanks, > > > Kanagaraj > > > > > > > > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: > > > > > > Hi Kanagaraj, > > > > > > Yes...once i will start the gluster service and then vdsmd > > > ...the > > > host > > > can connect to cluster...but the question is why it's not started > > > even it > > > has chkconfig enabled... > > > > > > I have tested it in two host cluster environment...(Centos 6.6 > > > and > > > centos 7.0) on both hypervisior cluster..it's failed to reconnect > > > in > > > to > > > cluster after reboot.... > > > > > > In both the environment glusterd enabled for next boot....but > > > it's > > > failed with the same error....seems it's bug in either gluster or > > > Ovirt ?? > > > > > > Please help me to find the workaround here if can not resolve > > > it...as > > > without this the Host machine can not connect after > > > reboot....that > > > means > > > engine will consider it as down and every time need to manually > > > start > > > the > > > gluster service and vdsmd... ?? > > > > > > Thanks, > > > Punit > > > > > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj < kmayilsa@redhat.com> > > > wrote: > > > > > >> From vdsm.log "error: Connection failed. Please check if > > >> gluster > > >> daemon > > >> is operational." > > >> > > >> Starting glusterd service should fix this issue. 'service > > >> glusterd > > >> start' > > >> But i am wondering why the glusterd was not started > > >> automatically > > >> after > > >> the reboot. > > >> > > >> Thanks, > > >> Kanagaraj > > >> > > >> > > >> > > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: > > >> > > >> Hi Kanagaraj, > > >> > > >> Please find the attached VDSM logs :- > > >> > > >> ---------------- > > >> Thread-13::DEBUG::2014-11-24 > > >> > > >> > > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > > >> Owner.cancelAll requests {} > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) > > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting > > >> False > > >> Thread-13::DEBUG::2014-11-24 > > >> > > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) > > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state > > >> init > > >> -> > > >> state preparing > > >> Thread-13::INFO::2014-11-24 > > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and > > >> protect: > > >> repoStats(options=None) > > >> Thread-13::INFO::2014-11-24 > > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and > > >> protect: > > >> repoStats, Return response: {} > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) > > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} > > >> Thread-13::DEBUG::2014-11-24 > > >> > > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) > > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state > > >> preparing > > >> -> > > >> state finished > > >> Thread-13::DEBUG::2014-11-24 > > >> > > >> > > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > > >> Owner.releaseAll requests {} resources {} > > >> Thread-13::DEBUG::2014-11-24 > > >> > > >> > > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > > >> Owner.cancelAll 
requests {} > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) > > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting > > >> False > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client > > >> [10.10.10.2]::call > > >> getCapabilities with () {} > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,553::utils::738::root::(execCmd) > > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,560::utils::758::root::(execCmd) > > >> SUCCESS: <err> = ''; <rc> = 0 > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package > > >> ('gluster-swift',) not found > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package > > >> ('gluster-swift-object',) not found > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package > > >> ('gluster-swift-plugin',) not found > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package > > >> ('gluster-swift-account',) not found > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package > > >> ('gluster-swift-proxy',) not found > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package > > >> ('gluster-swift-doc',) not found > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package > > >> ('gluster-swift-container',) not found > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package > > >> ('glusterfs-geo-replication',) not found > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,600::caps::646::root::(get) > > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required > > >> >= > > >> 0.10.2-31 > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return > > >> getCapabilities > > >> with {'status': {'message': 'Done', 'code': 0}, 'info': > > >> {'HBAInventory': > > >> {'iSCSI': [{'InitiatorName': > > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], > > >> 'FC': > > >> []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', > > >> 'buildtime': > > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': > > >> {'release': > > >> '1.el6', > > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': > > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': > > >> '3.5.1'}, > > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': > > >> 1402324637L, > > >> 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', > > >> 'buildtime': > > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': > > >> '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': > > >> '0.12.1.2'}, > > >> 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': > > >> 1402435700L, > > >> 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', > > >> 'buildtime': > > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': > > >> '1.el6', > > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': > > >> {'release': > > >> '2.el6', > > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, > > >> 'glusterfs-server': > > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': > > >> '3.5.1'}}, > > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': > > >> 'Intel(R) > > >> Xeon(R) 
CPU X5650 @ 2.67GHz', 'liveMerge': 'false', > > >> 'hooks': > > >> {}, > > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, > > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], > > >> 'networks': > > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', > > >> 'bridged': > > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': > > >> '1500', > > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': > > >> [' > > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': > > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, > > >> 'Internal': > > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', > > >> 'HOTPLUG': > > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', > > >> 'BOOTPROTO': > > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', > > >> 'ONBOOT': > > >> 'no'}, 'bridged': True, 'ipv6addrs': > > >> ['fe80::210:18ff:fecd:daac/64'], > > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': > > >> 'off', > > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': > > >> ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': > > >> '10.10.10.6', > > >> 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], > > >> 'mtu': > > >> '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', > > >> 'ipv4addrs': [' > > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': > > >> u'bond1', > > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': > > >> 'VMNetwork', > > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': > > >> '1500', > > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': > > >> 'off', > > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, > > >> 'bridged': > > >> True, > > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', > > >> 'bootproto4': > > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': > > >> '1500', > > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': > > >> {'Internal': > > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': > > >> '9000', > > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': > > >> 'off', > > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, > > >> 'ipv6addrs': > > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', > > >> 'stp': > > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', > > >> 'opts': > > >> {'topology_change_detected': '0', 'multicast_last_member_count': > > >> '2', > > >> 'hash_elasticity': '4', 'multicast_query_response_interval': > > >> '999', > > >> 'multicast_snooping': '1', 'multicast_startup_query_interval': > > >> '3124', > > >> 'hello_timer': '31', 'multicast_querier_interval': '25496', > > >> 'max_age': > > >> '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': > > >> '8000.001018cddaac', 'priority': '32768', > > >> 'multicast_membership_interval': > > >> '25996', 'root_path_cost': '0', 'root_port': '0', > > >> 'multicast_querier': > > >> '0', > > >> 'multicast_startup_query_count': '2', 'hello_time': '199', > > >> 'topology_change': '0', 'bridge_id': '8000.001018cddaac', > > >> 'topology_change_timer': '0', 'ageing_time': '29995', > > >> 'gc_timer': > > >> '31', > > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > > >> 'multicast_query_interval': '12498', > > >> 'multicast_last_member_interval': > > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': > > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 
'cfg': {'DEFROUTE': > > >> 'no', > > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': > > >> 'no', > > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', > > >> 'TYPE': > > >> 'Bridge', > > >> 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], > > >> 'mtu': > > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], > > >> 'ipv6gateway': > > >> '::', > > >> 'gateway': '', 'opts': {'topology_change_detected': '0', > > >> 'multicast_last_member_count': '2', 'hash_elasticity': '4', > > >> 'multicast_query_response_interval': '999', > > >> 'multicast_snooping': > > >> '1', > > >> 'multicast_startup_query_interval': '3124', 'hello_timer': > > >> '131', > > >> 'multicast_querier_interval': '25496', 'max_age': '1999', > > >> 'hash_max': > > >> '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', > > >> 'priority': > > >> '32768', 'multicast_membership_interval': '25996', > > >> 'root_path_cost': > > >> '0', > > >> 'root_port': '0', 'multicast_querier': '0', > > >> 'multicast_startup_query_count': '2', 'hello_time': '199', > > >> 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', > > >> 'topology_change_timer': '0', 'ageing_time': '29995', > > >> 'gc_timer': > > >> '31', > > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > > >> 'multicast_query_interval': '12498', > > >> 'multicast_last_member_interval': > > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': > > >> ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', > > >> 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': > > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', > > >> 'NM_CONTROLLED': > > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': > > >> 'bond1', > > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': > > >> '9000', > > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', > > >> 'speed': > > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', > > >> 'cfg': > > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': > > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', > > >> 'ONBOOT': > > >> 'no'}, > > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], > > >> 'hwaddr': > > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': > > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', > > >> 'NM_CONTROLLED': > > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': > > >> 'bond0', > > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': > > >> '1500', > > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', > > >> 'speed': > > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', > > >> 'cfg': > > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': > > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', > > >> 'ONBOOT': > > >> 'yes'}, > > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], > > >> 'hwaddr': > > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', > > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], > > >> 'cpuFlags': > > >> > > >> > > >> 
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', > > >> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', > > >> 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', > > >> '3.2', > > >> '3.3', > > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', > > >> 'bondings': > > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', > > >> 'slaves': > > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': > > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', > > >> 'BONDING_OPTS': > > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, > > >> 'ipv6addrs': > > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', > > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': > > >> ['eth0', > > >> 'eth1'], > > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': > > >> '10.10.10.6', > > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': > > >> 'no', > > >> 'MTU': > > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', > > >> 'BOOTPROTO': > > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', > > >> 'ONBOOT': > > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': > > >> '9000', > > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' > > >> <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', > > >> 'slaves': > > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, > > >> 'bond2': > > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': > > >> [], > > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, > > >> 'mtu': > > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': > > >> '00:00:00:00:00:00'}}, > > >> 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': > > >> '2667.000', > > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, > > >> 9, > > >> 10, 11, > > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': > > >> [0, > > >> 1, 2, > > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', > > >> 'vlans': > > >> {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': > > >> {'DEFROUTE': > > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', > > >> 'GATEWAY': > > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', > > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', > > >> 'ONBOOT': > > >> 'yes'}, > > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, > > >> 'mtu': > > >> '1500', > > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] > > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', > > >> 'addr': > > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': > > >> 'no', > > >> 'MTU': > > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': > > >> 'no'}, > > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, > > >> 'mtu': > > >> '1500', > > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', > > >> 'addr': > > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': > > >> 'no', > > >> 'MTU': 
> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': > > >> 'no'}, > > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, > > >> 'mtu': > > >> '9000', > > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', > > >> 'kvmEnabled': > > >> 'true', > > >> 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': > > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', > > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', > > >> u'rhel5.4.0'], > > >> 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': > > >> '6', > > >> 'name': > > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client > > >> [10.10.10.2]::call > > >> getHardwareInfo with () {} > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return > > >> getHardwareInfo > > >> with {'status': {'message': 'Done', 'code': 0}, 'info': > > >> {'systemProductName': 'CS24-TY', 'systemSerialNumber': > > >> '7LWSPN1', > > >> 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': > > >> '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': > > >> 'Dell'}} > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client > > >> [10.10.10.2]::call > > >> hostsList with () {} flowID [222e8036] > > >> Thread-13::ERROR::2014-11-24 > > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception > > >> occured > > >> Traceback (most recent call last): > > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in > > >> wrapper > > >> res = f(*args, **kwargs) > > >> File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper > > >> rv = func(*args, **kwargs) > > >> File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList > > >> return {'hosts': self.svdsmProxy.glusterPeerStatus()} > > >> File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ > > >> return callMethod() > > >> File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> > > >> **kwargs) > > >> File "<string>", line 2, in glusterPeerStatus > > >> File "/usr/lib64/python2.6/multiprocessing/managers.py",
> > >> 740, > > >> in > > >> _callmethod > > >> raise convert_to_error(kind, result) > > >> GlusterCmdExecFailedException: Command execution failed > > >> error: Connection failed. Please check if gluster daemon is > > >> operational. > > >> return code: 1 > > >> Thread-13::DEBUG::2014-11-24 > > >> > > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) > > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state > > >> init > > >> -> > > >> state preparing > > >> Thread-13::INFO::2014-11-24 > > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and > > >> protect: > > >> repoStats(options=None) > > >> Thread-13::INFO::2014-11-24 > > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and > > >> protect: > > >> repoStats, Return response: {} > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) > > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} > > >> Thread-13::DEBUG::2014-11-24 > > >> > > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) > > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state > > >> preparing > > >> -> > > >> state finished > > >> Thread-13::DEBUG::2014-11-24 > > >> > > >> > > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > > >> Owner.releaseAll requests {} resources {} > > >> Thread-13::DEBUG::2014-11-24 > > >> > > >> > > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > > >> Owner.cancelAll requests {} > > >> Thread-13::DEBUG::2014-11-24 > > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) > > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting > > >> False > > >> ------------------------------- > > >> > > >> [root@compute4 ~]# service glusterd status > > >> glusterd is stopped > > >> [root@compute4 ~]# chkconfig --list | grep glusterd > > >> glusterd 0:off 1:off 2:on 3:on 4:on 5:on > > >> 6:off > > >> [root@compute4 ~]# > > >> > > >> Thanks, > > >> Punit > > >> > > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj < kmayilsa@redhat.com> > > >> wrote: > > >> > > >>> Can you send the corresponding error in vdsm.log from the > > >>> host? > > >>> > > >>> Also check if glusterd service is running. > > >>> > > >>> Thanks, > > >>> Kanagaraj > > >>> > > >>> > > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote: > > >>> > > >>> Hi, > > >>> > > >>> After reboot my Hypervisior host can not activate again in
> > >>> cluster > > >>> and failed with the following error :- > > >>> > > >>> Gluster command [<UNKNOWN>] failed on server... > > >>> > > >>> Engine logs :- > > >>> > > >>> 2014-11-24 18:05:28,397 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-64) START, > > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = > > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90 > > >>> 2014-11-24 18:05:30,609 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-64) FINISH, > > >>> GlusterVolumesListVDSCommand, > > >>> return: > > >>> > > >>> > > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0 }, > > >>> log id: 5f251c90 > > >>> 2014-11-24 18:05:33,768 INFO > > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > > >>> (ajp--127.0.0.1-8702-8) > > >>> [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= > > >>> key: > > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS > > >>> , sharedLocks= ] > > >>> 2014-11-24 18:05:33,795 INFO > > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: > > >>> ActivateVdsCommand internal: false. Entities affected : ID: > > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group > > >>> MANIPULATE_HOST > > >>> with role type ADMIN > > >>> 2014-11-24 18:05:33,796 INFO > > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring > > >>> lock in > > >>> order to prevent monitoring for host Compute5 from data-center > > >>> SV_WTC > > >>> 2014-11-24 18:05:33,797 INFO > > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, > > >>> from > > >>> now a > > >>> monitoring of host will be skipped for host Compute5 from > > >>> data-center > > >>> SV_WTC > > >>> 2014-11-24 18:05:33,817 INFO > > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] START, > > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = > > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, > > >>> nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: > > >>> 1cbc7311 > > >>> 2014-11-24 18:05:33,820 INFO > > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, > > >>> SetVdsStatusVDSCommand, log id: 1cbc7311 > > >>> 2014-11-24 18:05:34,086 INFO > > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > > >>> (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock > > >>> released. > > >>> Monitoring can run now for host Compute5 from data-center > > >>> SV_WTC > > >>> 2014-11-24 18:05:34,088 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > > >>> (org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, > > >>> Job > > >>> ID: > > >>> 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom > > >>> Event ID: > > >>> -1, Message: Host Compute5 was activated by admin. 
> > >>> 2014-11-24 18:05:34,090 INFO > > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > > >>> (org.ovirt.thread.pool-8-thread-45) Lock freed to object > > >>> EngineLock > > >>> [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a > > >>> value: > > >>> VDS > > >>> , sharedLocks= ] > > >>> 2014-11-24 18:05:35,792 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] START, > > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = > > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832 > > >>> 2014-11-24 18:05:37,064 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-69) START, > > >>> GetHardwareInfoVDSCommand(HostName = Compute5, HostId = > > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, > > >>> vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log > > >>> id: > > >>> 6d560cc2 > > >>> 2014-11-24 18:05:37,074 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-69) FINISH, > > >>> GetHardwareInfoVDSCommand, log > > >>> id: 6d560cc2 > > >>> 2014-11-24 18:05:37,093 WARN > > >>> [org.ovirt.engine.core.vdsbroker.VdsManager] > > >>> (DefaultQuartzScheduler_Worker-69) Host Compute5 is running > > >>> with > > >>> disabled > > >>> SELinux. > > >>> 2014-11-24 18:05:37,127 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] > > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: > > >>> HandleVdsCpuFlagsOrClusterChangedCommand internal: true. > > >>> Entities > > >>> affected > > >>> : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > > >>> 2014-11-24 18:05:37,147 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, > > >>> GlusterServersListVDSCommand(HostName = Compute5, HostId = > > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87 > > >>> 2014-11-24 18:05:37,164 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, > > >>> GlusterServersListVDSCommand, log id: 4faed87 > > >>> 2014-11-24 18:05:37,189 INFO > > >>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] > > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: > > >>> SetNonOperationalVdsCommand internal: true. 
Entities affected : > > >>> ID: > > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > > >>> 2014-11-24 18:05:37,206 INFO > > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, > > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = > > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, > > >>> nonOperationalReason=GLUSTER_COMMAND_FAILED, > > >>> stopSpmFailureLogged=false), > > >>> log id: fed5617 > > >>> 2014-11-24 18:05:37,209 INFO > > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, > > >>> SetVdsStatusVDSCommand, log id: fed5617 > > >>> 2014-11-24 18:05:37,223 ERROR > > >>> > > >>> > > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: > > >>> 4a84c4e5, > > >>> Job > > >>> ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, > > >>> Custom > > >>> Event > > >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed on server > > >>> Compute5. > > >>> 2014-11-24 18:05:37,243 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: > > >>> null, > > >>> Call > > >>> Stack: null, Custom Event ID: -1, Message: Status of host > > >>> Compute5 > > >>> was > > >>> set > > >>> to NonOperational. > > >>> 2014-11-24 18:05:37,272 INFO > > >>> [org.ovirt.engine.core.bll.HandleVdsVersionCommand] > > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: > > >>> HandleVdsVersionCommand internal: true. Entities affected : > > >>> ID: > > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > > >>> 2014-11-24 18:05:37,274 INFO > > >>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] > > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host > > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in > > >>> NonOperational status for reason GLUSTER_COMMAND_FAILED. > > >>> SetNonOperationalVds command is skipped. > > >>> 2014-11-24 18:05:38,065 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, > > >>> GlusterVolumesListVDSCommand, return: > > >>> > > >>> > > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1 }, > > >>> log id: 48a0c832 > > >>> 2014-11-24 18:05:43,243 INFO > > >>> > > >>> > > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > > >>> (DefaultQuartzScheduler_Worker-35) START, > > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = > > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc > > >>> ^C > > >>> [root@ccr01 ~]# > > >>> > > >>> Thanks, > > >>> Punit > > >>> > > >>> > > >>> _______________________________________________ > > >>> Users mailing > > >>> > > >>> listUsers@ovirt.orghttp://
lists.ovirt.org/mailman/listinfo/users
On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal <hypunit@gmail.com> wrote:

Hey Punit, In the logs you've provided, GlusterD appears to be running correctly. Could you provide the logs for the time period when GlusterD attempts to start but fails?
~kaushal
On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyhs5 and http://ur1.ca/iyhue
Thanks, punit
On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit, Could you start Glusterd in debug mode and provide the logs here? To start it in debug mode, append '-LDEBUG' to the ExecStart line in the service file.
~kaushal
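For anyone following along, the change Kaushal describes, plus collecting the requested start-up logs, looks roughly like this on a systemd host. This is a sketch: the unit path matches the one Punit posts further down, the flag spelling is the one suggested above, and the log path is the one mentioned earlier in the thread.
----------------
# In /usr/lib/systemd/system/glusterd.service, change the ExecStart line to:
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid -LDEBUG

# Reload unit files and restart the daemon
systemctl daemon-reload
systemctl restart glusterd

# Debug output lands in the usual glusterd log
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# The failed start-up attempt itself can also be pulled from the journal
journalctl -u glusterd -b
----------------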
Hi,
Can anybody help me on this ??
On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
Thanks for the detailed reply....let me explain my setup first :-
1. Ovirt Engine
2. 4* host as well as storage machine (Host and gluster combined)
3. Every host has 24 bricks...
Now whenever the host machine reboots...it can come up but can not join the cluster again, and it throws the following error "Gluster command [<UNKNOWN>] failed on server.."
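A quick way to confirm what state such a host is actually in right after the reboot (before starting anything by hand) is sketched below; these are generic commands, not output captured from this cluster.
----------------
# Did glusterd actually start, and is it enabled for boot?
systemctl status glusterd
systemctl is-enabled glusterd

# If it is running, does it still see its peers?
gluster peer status
----------------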
Please check my comment in line :-
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names.
---------------
[root@cpu05 ~]# gluster peer status
Number of Peers: 3

Hostname: cpu03.stack.com
Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb
State: Peer in Cluster (Connected)

Hostname: cpu04.stack.com
Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0
State: Peer in Cluster (Connected)
Other names: 10.10.0.8

Hostname: cpu02.stack.com
Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25
State: Peer in Cluster (Connected)
[root@cpu05 ~]#
----------------
2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
I think the service unit itself is already doing the same job....
[root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
[root@cpu05 ~]#
--------------------
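One thing worth noting about the unit above: Before=network-online.target orders glusterd to start before the point at which the network is considered online, not after it, and network.target alone does not guarantee that interface addresses are already configured. A minimal sketch of a drop-in that makes glusterd wait for the network follows; the drop-in file name is arbitrary, and network-online.target only really blocks if a matching wait-online service is enabled on the host.
----------------
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/wait-for-network.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF

# network-online.target only waits if a wait-online service is enabled,
# e.g. NetworkManager-wait-online.service on NetworkManager-managed hosts
systemctl daemon-reload
systemctl restart glusterd
----------------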
gluster logs :-
[2014-11-24 09:22:22.147471] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.1 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: Maximum allowed open file descriptors set to 65536
[2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: Using /var/lib/glusterd as working directory
[2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2014-11-24 09:22:22.155264] E [rdma.c:4483:init] 0-rdma.management: Failed to initialize IB Device
[2014-11-24 09:22:22.155285] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2014-11-24 09:22:22.155354] W [rpcsvc.c:1524:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2014-11-24 09:22:22.156290] I [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system
[2014-11-24 09:22:22.161318] I [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600
[2014-11-24 09:22:22.821800] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.825810] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828705] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828771] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.832670] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.835919] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.840209] E [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2014-11-24 09:22:22.840264] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
Thanks, Punit
On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> wrote:
Based on the logs I can guess that glusterd is being started before the network has come up and that the addresses given to bricks do not directly match the addresses used during peer probe.
The gluster_after_reboot log has the line "[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore".
Brick resolution fails when glusterd cannot match the address of a brick with one of the peers. Brick resolution happens in two phases, 1. We first try to identify the peer by performing string comparisons with the brick address and the peer addresses (The peer names will be the names/addresses that were given when the peer was probed). 2. If we don't find a match from step 1, we will then resolve all the brick address and the peer addresses into addrinfo structs, and then compare these structs to find a match. This process should generally find a match if available. This will fail only if the network is not up yet as we cannot resolve addresses.
The above steps are applicable only to glusterfs versions >=3.6. They were introduced to reduce problems with peer identification, like the one you encountered
Since both of the steps failed to find a match in one run, but succeeded later, we can come to the conclusion that, a) the bricks don't have the exact same string used in peer probe for their addresses as step 1 failed, and b) the network was not up in the initial run, as step 2 failed during the initial run, but passed in the second run.
Please let me know if my conclusion is correct.
If it is, you can solve your problem in two ways. 1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names. 2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
If you have already followed these steps and yet still hit the problem, then please provide more information (setup, logs, etc.). It could be much different problem that you are facing.
~kaushal
On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Is there any one can help on this ??
Thanks, punit
On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote: > > Hi, > > My Glusterfs version is :- glusterfs-3.6.1-1.el7 > > On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy > <kmayilsa@redhat.com> > wrote: >> >> [+Gluster-users@gluster.org] >> >> "Initialization of volume 'management' failed, review your volfile >> again", glusterd throws this error when the service is started >> automatically >> after the reboot. But the service is successfully started later >> manually by >> the user. >> >> can somebody from gluster-users please help on this? >> >> glusterfs version: 3.5.1 >> >> Thanks, >> Kanagaraj >> >> ----- Original Message ----- >> > From: "Punit Dambiwal" <hypunit@gmail.com> >> > To: "Kanagaraj" <kmayilsa@redhat.com> >> > Cc: users@ovirt.org >> > Sent: Tuesday, November 25, 2014 7:24:45 PM >> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on >> > server... >> > >> > Hi Kanagraj, >> > >> > Please check the attached log files....i didn't find any thing >> > special.... >> > >> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <
kmayilsa@redhat.com>
>> > wrote: >> > >> > > Do you see any errors in >> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log >> > > when >> > > the >> > > service is trying to start automatically after the reboot? >> > > >> > > Thanks, >> > > Kanagaraj >> > > >> > > >> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >> > > >> > > Hi Kanagaraj, >> > > >> > > Yes...once i will start the gluster service and then vdsmd >> > > ...the >> > > host >> > > can connect to cluster...but the question is why it's not started >> > > even it >> > > has chkconfig enabled... >> > > >> > > I have tested it in two host cluster environment...(Centos 6.6 >> > > and >> > > centos 7.0) on both hypervisior cluster..it's failed to reconnect >> > > in >> > > to >> > > cluster after reboot.... >> > > >> > > In both the environment glusterd enabled for next boot....but >> > > it's >> > > failed with the same error....seems it's bug in either gluster or >> > > Ovirt ?? >> > > >> > > Please help me to find the workaround here if can not resolve >> > > it...as >> > > without this the Host machine can not connect after >> > > reboot....that >> > > means >> > > engine will consider it as down and every time need to manually >> > > start >> > > the >> > > gluster service and vdsmd... ?? >> > > >> > > Thanks, >> > > Punit >> > > >> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj < kmayilsa@redhat.com> >> > > wrote: >> > > >> > >> From vdsm.log "error: Connection failed. Please check if >> > >> gluster >> > >> daemon >> > >> is operational." >> > >> >> > >> Starting glusterd service should fix this issue. 'service >> > >> glusterd >> > >> start' >> > >> But i am wondering why the glusterd was not started >> > >> automatically >> > >> after >> > >> the reboot. >> > >> >> > >> Thanks, >> > >> Kanagaraj >> > >> >> > >> >> > >> >> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >> > >> >> > >> Hi Kanagaraj, >> > >> >> > >> Please find the attached VDSM logs :- >> > >> >> > >> ---------------- >> > >> Thread-13::DEBUG::2014-11-24 >> > >> >> > >> >> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> > >> Owner.cancelAll requests {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting >> > >> False >> > >> Thread-13::DEBUG::2014-11-24 >> > >> >> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state >> > >> init >> > >> -> >> > >> state preparing >> > >> Thread-13::INFO::2014-11-24 >> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and >> > >> protect: >> > >> repoStats(options=None) >> > >> Thread-13::INFO::2014-11-24 >> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and >> > >> protect: >> > >> repoStats, Return response: {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> >> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state >> > >> preparing >> > >> -> >> > >> state finished >> > >> Thread-13::DEBUG::2014-11-24 >> > >> >> > >> >> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >> > >> Owner.releaseAll requests {} resources {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> >> > 
>> >> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> > >> Owner.cancelAll requests {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting >> > >> False >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client >> > >> [10.10.10.2]::call >> > >> getCapabilities with () {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,553::utils::738::root::(execCmd) >> > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,560::utils::758::root::(execCmd) >> > >> SUCCESS: <err> = ''; <rc> = 0 >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package >> > >> ('gluster-swift',) not found >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package >> > >> ('gluster-swift-object',) not found >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package >> > >> ('gluster-swift-plugin',) not found >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package >> > >> ('gluster-swift-account',) not found >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package >> > >> ('gluster-swift-proxy',) not found >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package >> > >> ('gluster-swift-doc',) not found >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package >> > >> ('gluster-swift-container',) not found >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package >> > >> ('glusterfs-geo-replication',) not found >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,600::caps::646::root::(get) >> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >> > >> >= >> > >> 0.10.2-31 >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return >> > >> getCapabilities >> > >> with {'status': {'message': 'Done', 'code': 0}, 'info': >> > >> {'HBAInventory': >> > >> {'iSCSI': [{'InitiatorName': >> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >> > >> 'FC': >> > >> []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', >> > >> 'buildtime': >> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >> > >> {'release': >> > >> '1.el6', >> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': >> > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': >> > >> '3.5.1'}, >> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': >> > >> 1402324637L, >> > >> 'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', >> > >> 'buildtime': >> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': >> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': >> > >> '0.12.1.2'}, >> > >> 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': >> > >> 1402435700L, >> > >> 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', >> > >> 'buildtime': >> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': >> > >> '1.el6', >> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >> > >> {'release': >> > >> '2.el6', >> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >> > >> 
'glusterfs-server': >> > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': >> > >> '3.5.1'}}, >> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': >> > >> 'Intel(R) >> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', >> > >> 'hooks': >> > >> {}, >> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, >> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], >> > >> 'networks': >> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', >> > >> 'bridged': >> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': >> > >> '1500', >> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': >> > >> [' >> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], 'interface': >> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, >> > >> 'Internal': >> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', >> > >> 'HOTPLUG': >> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', >> > >> 'BOOTPROTO': >> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', >> > >> 'ONBOOT': >> > >> 'no'}, 'bridged': True, 'ipv6addrs': >> > >> ['fe80::210:18ff:fecd:daac/64'], >> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': >> > >> 'off', >> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': >> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': >> > >> '10.10.10.6', >> > >> 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >> > >> 'mtu': >> > >> '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', >> > >> 'ipv4addrs': [' >> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': >> > >> u'bond1', >> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': >> > >> 'VMNetwork', >> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': >> > >> '1500', >> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': >> > >> 'off', >> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, >> > >> 'bridged': >> > >> True, >> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', >> > >> 'bootproto4': >> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': >> > >> '1500', >> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': >> > >> {'Internal': >> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': >> > >> '9000', >> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': >> > >> 'off', >> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, >> > >> 'ipv6addrs': >> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', >> > >> 'stp': >> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', >> > >> 'opts': >> > >> {'topology_change_detected': '0', 'multicast_last_member_count': >> > >> '2', >> > >> 'hash_elasticity': '4', 'multicast_query_response_interval': >> > >> '999', >> > >> 'multicast_snooping': '1', 'multicast_startup_query_interval': >> > >> '3124', >> > >> 'hello_timer': '31', 'multicast_querier_interval': '25496', >> > >> 'max_age': >> > >> '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': >> > >> '8000.001018cddaac', 'priority': '32768', >> > >> 'multicast_membership_interval': >> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >> > >> 'multicast_querier': >> > >> '0', >> > >> 'multicast_startup_query_count': '2', 'hello_time': '199', >> > >> 'topology_change': '0', 'bridge_id': '8000.001018cddaac', >> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >> > 
>> 'gc_timer': >> > >> '31', >> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >> > >> 'multicast_query_interval': '12498', >> > >> 'multicast_last_member_interval': >> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': >> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': >> > >> 'no', >> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': >> > >> 'no', >> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', >> > >> 'TYPE': >> > >> 'Bridge', >> > >> 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >> > >> 'mtu': >> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >> > >> 'ipv6gateway': >> > >> '::', >> > >> 'gateway': '', 'opts': {'topology_change_detected': '0', >> > >> 'multicast_last_member_count': '2', 'hash_elasticity': '4', >> > >> 'multicast_query_response_interval': '999', >> > >> 'multicast_snooping': >> > >> '1', >> > >> 'multicast_startup_query_interval': '3124', 'hello_timer': >> > >> '131', >> > >> 'multicast_querier_interval': '25496', 'max_age': '1999', >> > >> 'hash_max': >> > >> '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', >> > >> 'priority': >> > >> '32768', 'multicast_membership_interval': '25996', >> > >> 'root_path_cost': >> > >> '0', >> > >> 'root_port': '0', 'multicast_querier': '0', >> > >> 'multicast_startup_query_count': '2', 'hello_time': '199', >> > >> 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', >> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >> > >> 'gc_timer': >> > >> '31', >> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >> > >> 'multicast_query_interval': '12498', >> > >> 'multicast_last_member_interval': >> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': >> > >> ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', >> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': >> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', >> > >> 'NM_CONTROLLED': >> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': >> > >> 'bond1', >> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': >> > >> '9000', >> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', >> > >> 'speed': >> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', >> > >> 'cfg': >> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': >> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', >> > >> 'ONBOOT': >> > >> 'no'}, >> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], >> > >> 'hwaddr': >> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': >> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', >> > >> 'NM_CONTROLLED': >> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': >> > >> 'bond0', >> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': >> > >> '1500', >> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', >> > >> 'speed': >> > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', >> > >> 'cfg': >> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': >> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', >> > >> 'ONBOOT': >> > >> 'yes'}, >> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], >> > >> 'hwaddr': >> > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', >> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], >> > >> 'cpuFlags': >> > >> >> > >> 
>> > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >> > >> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', >> > >> 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', >> > >> '3.2', >> > >> '3.3', >> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', >> > >> 'bondings': >> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', >> > >> 'slaves': >> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': >> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', >> > >> 'BONDING_OPTS': >> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, >> > >> 'ipv6addrs': >> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', >> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': >> > >> ['eth0', >> > >> 'eth1'], >> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': >> > >> '10.10.10.6', >> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': >> > >> 'no', >> > >> 'MTU': >> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', >> > >> 'BOOTPROTO': >> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', >> > >> 'ONBOOT': >> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': >> > >> '9000', >> > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' >> > >> <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', >> > >> 'slaves': >> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, >> > >> 'bond2': >> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': >> > >> [], >> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, >> > >> 'mtu': >> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': >> > >> '00:00:00:00:00:00'}}, >> > >> 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': >> > >> '2667.000', >> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, >> > >> 9, >> > >> 10, 11, >> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': >> > >> [0, >> > >> 1, 2, >> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', >> > >> 'vlans': >> > >> {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': >> > >> {'DEFROUTE': >> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', >> > >> 'GATEWAY': >> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', >> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', >> > >> 'ONBOOT': >> > >> 'yes'}, >> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, >> > >> 'mtu': >> > >> '1500', >> > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24'] >> > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', >> > >> 'addr': >> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': >> > >> 'no', >> > >> 'MTU': >> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': >> > >> 'no'}, >> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, >> > >> 'mtu': >> > >> '1500', >> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', >> > >> 'addr': >> > >> '', 
'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': >> > >> 'no', >> > >> 'MTU': >> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': >> > >> 'no'}, >> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, >> > >> 'mtu': >> > >> '9000', >> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', >> > >> 'kvmEnabled': >> > >> 'true', >> > >> 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': >> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', >> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', >> > >> u'rhel5.4.0'], >> > >> 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': >> > >> '6', >> > >> 'name': >> > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client >> > >> [10.10.10.2]::call >> > >> getHardwareInfo with () {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return >> > >> getHardwareInfo >> > >> with {'status': {'message': 'Done', 'code': 0}, 'info': >> > >> {'systemProductName': 'CS24-TY', 'systemSerialNumber': >> > >> '7LWSPN1', >> > >> 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': >> > >> '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': >> > >> 'Dell'}} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client >> > >> [10.10.10.2]::call >> > >> hostsList with () {} flowID [222e8036] >> > >> Thread-13::ERROR::2014-11-24 >> > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception >> > >> occured >> > >> Traceback (most recent call last): >> > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in >> > >> wrapper >> > >> res = f(*args, **kwargs) >> > >> File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper >> > >> rv = func(*args, **kwargs) >> > >> File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList >> > >> return {'hosts': self.svdsmProxy.glusterPeerStatus()} >> > >> File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ >> > >> return callMethod() >> > >> File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> >> > >> **kwargs) >> > >> File "<string>", line 2, in glusterPeerStatus >> > >> File "/usr/lib64/python2.6/multiprocessing/managers.py",
>> > >> 740, >> > >> in >> > >> _callmethod >> > >> raise convert_to_error(kind, result) >> > >> GlusterCmdExecFailedException: Command execution failed >> > >> error: Connection failed. Please check if gluster daemon is >> > >> operational. >> > >> return code: 1 >> > >> Thread-13::DEBUG::2014-11-24 >> > >> >> > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) >> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state >> > >> init >> > >> -> >> > >> state preparing >> > >> Thread-13::INFO::2014-11-24 >> > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and >> > >> protect: >> > >> repoStats(options=None) >> > >> Thread-13::INFO::2014-11-24 >> > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and >> > >> protect: >> > >> repoStats, Return response: {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) >> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> >> > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) >> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state >> > >> preparing >> > >> -> >> > >> state finished >> > >> Thread-13::DEBUG::2014-11-24 >> > >> >> > >> >> > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >> > >> Owner.releaseAll requests {} resources {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> >> > >> >> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> > >> Owner.cancelAll requests {} >> > >> Thread-13::DEBUG::2014-11-24 >> > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) >> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting >> > >> False >> > >> ------------------------------- >> > >> >> > >> [root@compute4 ~]# service glusterd status >> > >> glusterd is stopped >> > >> [root@compute4 ~]# chkconfig --list | grep glusterd >> > >> glusterd 0:off 1:off 2:on 3:on 4:on 5:on >> > >> 6:off >> > >> [root@compute4 ~]# >> > >> >> > >> Thanks, >> > >> Punit >> > >> >> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj < kmayilsa@redhat.com> >> > >> wrote: >> > >> >> > >>> Can you send the corresponding error in vdsm.log from the >> > >>> host? >> > >>> >> > >>> Also check if glusterd service is running. >> > >>> >> > >>> Thanks, >> > >>> Kanagaraj >> > >>> >> > >>> >> > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote: >> > >>> >> > >>> Hi, >> > >>> >> > >>> After reboot my Hypervisior host can not activate again in
>> > >>> the cluster and failed with the following error :-
>> > >>>
>> > >>> Gluster command [<UNKNOWN>] failed on server...
>> > >>>
>> > >>> Thanks,
>> > >>> Punit

Hi Kaushal,

Please find the logs here :- http://ur1.ca/iyoe5 and http://ur1.ca/iyoed

On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit, In the logs you've provided, GlusterD appears to be running correctly. Could you provide the logs for the time period when GlusterD attempts to start but fails?
~kaushal

On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyhs5 and http://ur1.ca/iyhue
Thanks, punit
On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit, Could you start Glusterd in debug mode and provide the logs here? To start it in debug mode, append '-LDEBUG' to the ExecStart line in the service file.
~kaushal
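For reference, a minimal sketch of that change on a systemd host, assuming the stock unit file at /usr/lib/systemd/system/glusterd.service quoted later in this thread (the sed one-liner is just one way to make the edit):

---------------
# append -LDEBUG to the ExecStart line of the glusterd unit
sed -i 's|^ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid$|& -LDEBUG|' \
    /usr/lib/systemd/system/glusterd.service

# reload the changed unit and restart the daemon
systemctl daemon-reload
systemctl restart glusterd

# debug output goes to the normal glusterd log
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
---------------

Remember to revert the unit (or use a drop-in override instead) once the debug logs have been captured.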
Hi,
Can anybody help me on this ??
On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
Thanks for the detailed reply....let me explain my setup first :-
1. Ovirt Engine
2. 4* host as well as storage machine (Host and gluster combined)
3. Every host has 24 bricks...
Now whenever the host machine reboots...it can come up but can not join the cluster again and throws the following error "Gluster command [<UNKNOWN>] failed on server.."
Please check my comments inline :-
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names.

---------------
[root@cpu05 ~]# gluster peer status
Number of Peers: 3

Hostname: cpu03.stack.com
Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb
State: Peer in Cluster (Connected)

Hostname: cpu04.stack.com
Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0
State: Peer in Cluster (Connected)
Other names: 10.10.0.8

Hostname: cpu02.stack.com
Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25
State: Peer in Cluster (Connected)
[root@cpu05 ~]#
----------------

2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
> I think the service itself is doing the same job....
[root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
[root@cpu05 ~]#
--------------------
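Worth noting on the unit above: it is ordered After=network.target and explicitly Before=network-online.target, which does not guarantee that interfaces and addresses are configured when glusterd starts. A quick way to inspect the ordering systemd actually applies on a given host (standard systemd tooling, nothing gluster-specific):

---------------
# what is glusterd ordered after/before, and what does it pull in?
systemctl show -p After -p Before -p Wants glusterd.service

# which units actually delayed glusterd during the last boot?
systemd-analyze critical-chain glusterd.service
---------------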
gluster logs :-
[2014-11-24 09:22:22.147471] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.1 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: Maximum allowed open file descriptors set to 65536
[2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: Using /var/lib/glusterd as working directory
[2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2014-11-24 09:22:22.155264] E [rdma.c:4483:init] 0-rdma.management: Failed to initialize IB Device
[2014-11-24 09:22:22.155285] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2014-11-24 09:22:22.155354] W [rpcsvc.c:1524:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2014-11-24 09:22:22.156290] I [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system
[2014-11-24 09:22:22.161318] I [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600
[2014-11-24 09:22:22.821800] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.825810] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828705] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828771] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.832670] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.835919] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.840209] E [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2014-11-24 09:22:22.840264] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
Thanks, Punit
On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> wrote:
Based on the logs I can guess that glusterd is being started before the network has come up and that the addresses given to bricks do not directly match the addresses used during peer probe.
The gluster_after_reboot log has the line "[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore".
Brick resolution fails when glusterd cannot match the address for the brick with one of the peers. Brick resolution happens in two phases:

1. We first try to identify the peer by performing string comparisons with the brick address and the peer addresses (the peer names will be the names/addresses that were given when the peer was probed).
2. If we don't find a match from step 1, we will then resolve the brick address and the peer addresses into addrinfo structs, and then compare these structs to find a match.

This process should generally find a match if one is available. It will fail only if the network is not up yet, as we cannot resolve addresses.
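To make the two phases a bit more concrete, here is a rough shell-level illustration (the hostname and IP are taken from the peer status output earlier in the thread; the brick path is only a placeholder, and getent is generic glibc tooling, not a gluster command):

---------------
# phase 1 is a plain string comparison: a brick recorded as
#   10.10.0.8:/some/brick/path
# will not string-match a peer that was probed as "cpu04.stack.com",
# even though both refer to the same machine.

# phase 2 resolves both sides and compares the resulting addresses,
# which only works once name resolution / the network is up:
getent ahosts cpu04.stack.com
getent ahosts 10.10.0.8
---------------

If neither lookup returns anything early in boot, phase 2 has nothing to compare, which lines up with the "resolve brick failed in restore" error above.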
The above steps are applicable only to glusterfs versions >=3.6. They were introduced to reduce problems with peer identification, like the one you encountered.

Since both of the steps failed to find a match in one run, but succeeded later, we can come to the conclusion that a) the bricks don't have the exact same string used in peer probe for their addresses, as step 1 failed, and b) the network was not up in the initial run, as step 2 failed during the initial run but passed in the second run.
Please let me know if my conclusion is correct.
If it is, you can solve your problem in two ways.

1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names.

2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
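A rough sketch of both suggestions on a systemd host such as the CentOS 7 nodes mentioned earlier in the thread (the volume name and brick paths below are placeholders, not taken from the actual cluster):

---------------
# 1. probe peers and define bricks with one consistent, resolvable name
gluster peer probe cpu04.stack.com
gluster volume create myvol replica 2 \
    cpu03.stack.com:/bricks/b1 cpu04.stack.com:/bricks/b1

# 2. delay glusterd until the network is actually online, via a drop-in
mkdir -p /etc/systemd/system/glusterd.service.d
cat > /etc/systemd/system/glusterd.service.d/wait-for-network.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
systemctl daemon-reload
---------------

Note that network-online.target only adds a real delay if a wait-online service (for example NetworkManager-wait-online on CentOS 7) is enabled; whether that applies here is an assumption, not something the thread confirms.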
If you have already followed these steps and yet still hit the problem, then please provide more information (setup, logs, etc.). It could be a much different problem that you are facing.
~kaushal
On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal <hypunit@gmail.com> wrote: > Is there any one can help on this ?? > > Thanks, > punit > > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com
> wrote: >> >> Hi, >> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >> <kmayilsa@redhat.com> >> wrote: >>> >>> [+Gluster-users@gluster.org] >>> >>> "Initialization of volume 'management' failed, review your volfile >>> again", glusterd throws this error when the service is started >>> automatically >>> after the reboot. But the service is successfully started later >>> manually by >>> the user. >>> >>> can somebody from gluster-users please help on this? >>> >>> glusterfs version: 3.5.1 >>> >>> Thanks, >>> Kanagaraj >>> >>> ----- Original Message ----- >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >>> > Cc: users@ovirt.org >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on >>> > server... >>> > >>> > Hi Kanagraj, >>> > >>> > Please check the attached log files....i didn't find any thing >>> > special.... >>> > >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj < kmayilsa@redhat.com> >>> > wrote: >>> > >>> > > Do you see any errors in >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log >>> > > when >>> > > the >>> > > service is trying to start automatically after the reboot? >>> > > >>> > > Thanks, >>> > > Kanagaraj >>> > > >>> > > >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >>> > > >>> > > Hi Kanagaraj, >>> > > >>> > > Yes...once i will start the gluster service and then vdsmd >>> > > ...the >>> > > host >>> > > can connect to cluster...but the question is why it's not started >>> > > even it >>> > > has chkconfig enabled... >>> > > >>> > > I have tested it in two host cluster environment...(Centos 6.6 >>> > > and >>> > > centos 7.0) on both hypervisior cluster..it's failed to reconnect >>> > > in >>> > > to >>> > > cluster after reboot.... >>> > > >>> > > In both the environment glusterd enabled for next boot....but >>> > > it's >>> > > failed with the same error....seems it's bug in either gluster or >>> > > Ovirt ?? >>> > > >>> > > Please help me to find the workaround here if can not resolve >>> > > it...as >>> > > without this the Host machine can not connect after >>> > > reboot....that >>> > > means >>> > > engine will consider it as down and every time need to manually >>> > > start >>> > > the >>> > > gluster service and vdsmd... ?? >>> > > >>> > > Thanks, >>> > > Punit >>> > > >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj < kmayilsa@redhat.com> >>> > > wrote: >>> > > >>> > >> From vdsm.log "error: Connection failed. Please check if >>> > >> gluster >>> > >> daemon >>> > >> is operational." >>> > >> >>> > >> Starting glusterd service should fix this issue. 'service >>> > >> glusterd >>> > >> start' >>> > >> But i am wondering why the glusterd was not started >>> > >> automatically >>> > >> after >>> > >> the reboot. 
>>> > >> Thanks,
>>> > >> Kanagaraj

This peer cannot be identified:

"[2014-12-03 02:29:25.998153] D [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"

I don't know why this address is not being resolved during boot time. If this is a valid peer, then the only reason I can think of is that the network is not up.

If you had previously detached the peer forcefully, then that could have left stale entries in some volumes. In this case as well, GlusterD will fail to identify the peer.

Do either of these reasons seem a possibility to you?

On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyoe5 and http://ur1.ca/iyoed
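A quick way to check both of the possibilities Kaushal raises above — boot-time resolution of the peer name, and stale peer/volume entries left behind by a forced detach. This is only a sketch against the stock /var/lib/glusterd working directory shown in the logs earlier; substitute whatever hostname your own log complains about:

---------------
# does the name from the log resolve on this host at all?
getent ahosts cpu05.zne01.hkg1.ovt.36stack.com

# is it a known peer, and do any volume definitions still reference it?
gluster peer status
grep -r "cpu05.zne01.hkg1.ovt.36stack.com" /var/lib/glusterd/peers /var/lib/glusterd/vols 2>/dev/null
---------------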
On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit, In the logs you've provided, GlusterD appears to be running correctly. Could you provide the logs for the time period when GlusterD attempts to start but fails?
~kaushal

On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyhs5 and http://ur1.ca/iyhue
Thanks, punit
On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit, Could you start Glusterd in debug mode and provide the logs here? To start it in debug mode, append '-LDEBUG' to the ExecStart line in the service file.
~kaushal
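For reference (this is not from the original mails), applying that suggestion on a systemd-based host could look roughly like the following; the glusterd.service file quoted later in this thread is the one being edited:
----------------
# copy the packaged unit so the edit survives package updates
cp /usr/lib/systemd/system/glusterd.service /etc/systemd/system/glusterd.service

# in the copy, change the ExecStart line to:
#   ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid -LDEBUG

systemctl daemon-reload
systemctl restart glusterd
# debug output then lands in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
----------------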
On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
Can anybody help me on this ??
On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
Thanks for the detailed reply....let me explain my setup first :-
1. Ovirt Engine
2. 4* host as well as storage machine (Host and gluster combined)
3. Every host has 24 bricks...

Now whenever the host machine reboots...it can come up but can not join the cluster again and throws the following error "Gluster command [<UNKNOWN>] failed on server.."

Please check my comments inline :-
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names.
---------------
[root@cpu05 ~]# gluster peer status
Number of Peers: 3

Hostname: cpu03.stack.com
Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb
State: Peer in Cluster (Connected)

Hostname: cpu04.stack.com
Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0
State: Peer in Cluster (Connected)
Other names:
10.10.0.8

Hostname: cpu02.stack.com
Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25
State: Peer in Cluster (Connected)
[root@cpu05 ~]#
----------------
2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
>> I think the service itself is doing the same job....
[root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
[root@cpu05 ~]#
--------------------
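A side note that is not part of the original mail: the unit above is ordered After=network.target but Before=network-online.target, so it does not wait for addresses to be configured or names to be resolvable. A rough sketch of re-ordering it the way Kaushal suggests in point 2 (untested, CentOS 7 assumed) could be:
----------------
# copy the packaged unit and adjust its ordering
cp /usr/lib/systemd/system/glusterd.service /etc/systemd/system/glusterd.service
# in the copy, replace the two ordering lines in [Unit] with:
#   After=network-online.target rpcbind.service
#   Wants=network-online.target
# (i.e. drop Before=network-online.target)
systemctl daemon-reload

# on CentOS 7, network-online.target only really waits for the network if the
# wait-online service is enabled
systemctl enable NetworkManager-wait-online.service
----------------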
gluster logs :-
[2014-11-24 09:22:22.147471] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.1 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: Maximum allowed open file descriptors set to 65536
[2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: Using /var/lib/glusterd as working directory
[2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2014-11-24 09:22:22.155264] E [rdma.c:4483:init] 0-rdma.management: Failed to initialize IB Device
[2014-11-24 09:22:22.155285] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2014-11-24 09:22:22.155354] W [rpcsvc.c:1524:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2014-11-24 09:22:22.156290] I [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system
[2014-11-24 09:22:22.161318] I [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600
[2014-11-24 09:22:22.821800] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.825810] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828705] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828771] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.832670] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.835919] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.840209] E [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2014-11-24 09:22:22.840264] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down
Thanks, Punit
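As an aside (not from the original mails), point 1 above means the probe string and the brick string should match character for character. With the peers shown above, a consistent layout could look roughly like this; the volume name and brick paths are invented for illustration:
----------------
gluster peer probe cpu02.stack.com
gluster peer probe cpu03.stack.com
gluster peer probe cpu04.stack.com

# bricks use exactly the same FQDNs that were used for the probes
gluster volume create vol01 replica 2 \
    cpu02.stack.com:/bricks/b01 cpu03.stack.com:/bricks/b01 \
    cpu04.stack.com:/bricks/b02 cpu05.stack.com:/bricks/b02
----------------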
On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> wrote:

Based on the logs I can guess that glusterd is being started before the network has come up, and that the addresses given to bricks do not directly match the addresses used during peer probe.

The gluster_after_reboot log has the line "[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore".

Brick resolution fails when glusterd cannot match the address for the brick with one of the peers. Brick resolution happens in two phases:
1. We first try to identify the peer by performing string comparisons with the brick address and the peer addresses (the peer names will be the names/addresses that were given when the peer was probed).
2. If we don't find a match from step 1, we will then resolve all the brick addresses and the peer addresses into addrinfo structs, and then compare these structs to find a match. This process should generally find a match if available. It will fail only if the network is not up yet, as we cannot resolve addresses.

The above steps are applicable only to glusterfs versions >= 3.6. They were introduced to reduce problems with peer identification, like the one you encountered.

Since both of the steps failed to find a match in one run, but succeeded later, we can come to the conclusion that
a) the bricks don't have the exact same string used in peer probe for their addresses, as step 1 failed, and
b) the network was not up in the initial run, as step 2 failed during the initial run, but passed in the second run.

Please let me know if my conclusion is correct.

If it is, you can solve your problem in two ways.
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names.
2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.

If you have already followed these steps and yet still hit the problem, then please provide more information (setup, logs, etc.). It could be a much different problem that you are facing.

~kaushal

On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Is there anyone who can help on this ??
Thanks,
punit

On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi,
My Glusterfs version is :- glusterfs-3.6.1-1.el7

On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:
[+Gluster-users@gluster.org]
"Initialization of volume 'management' failed, review your volfile again" -- glusterd throws this error when the service is started automatically after the reboot. But the service is successfully started later manually by the user.
Can somebody from gluster-users please help on this?
glusterfs version: 3.5.1
Thanks,
Kanagaraj

----- Original Message -----
From: "Punit Dambiwal" <hypunit@gmail.com>
To: "Kanagaraj" <kmayilsa@redhat.com>
Cc: users@ovirt.org
Sent: Tuesday, November 25, 2014 7:24:45 PM
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...

Hi Kanagraj,
Please check the attached log files....i didn't find anything special....

On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Do you see any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the service is trying to start automatically after the reboot?
Thanks,
Kanagaraj

On 11/24/2014 08:13 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Yes...once i will start the gluster service and then vdsmd ...the host can connect to cluster...but the question is why it's not started even it has chkconfig enabled...
I have tested it in two host cluster environment...(Centos 6.6 and centos 7.0) on both hypervisior cluster..it's failed to reconnect in to cluster after reboot....
In both the environment glusterd enabled for next boot....but it's failed with the same error....seems it's bug in either gluster or Ovirt ??
Please help me to find the workaround here if can not resolve it...as without this the Host machine can not connect after reboot....that means engine will consider it as down and every time need to manually start the gluster service and vdsmd... ??
Thanks,
Punit

On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
From vdsm.log "error: Connection failed. Please check if gluster daemon is operational."
Starting glusterd service should fix this issue. 'service glusterd start'
But i am wondering why the glusterd was not started automatically after the reboot.
Thanks,
Kanagaraj

On 11/24/2014 07:18 PM, Punit Dambiwal wrote:
Hi Kanagaraj,
Please find the attached VDSM logs :-
----------------
[The full vdsm.log excerpt (repoStats, getCapabilities and getHardwareInfo output) has been trimmed; the relevant failure is the traceback below.]
Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
-------------------------------
[root@compute4 ~]# service glusterd status
glusterd is stopped
[root@compute4 ~]# chkconfig --list | grep glusterd
glusterd   0:off   1:off   2:on   3:on   4:on   5:on   6:off
[root@compute4 ~]#
Thanks,
Punit

On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
Can you send the corresponding error in vdsm.log from the host?
Also check if glusterd service is running.
Thanks,
Kanagaraj

On 11/24/2014 03:39 PM, Punit Dambiwal wrote:
[The original problem report and engine log, already shown at the top of this thread, have been trimmed here, along with the mailing-list footers.]
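One small addition that is not in the original mails: the status check above uses the SysV tools on the CentOS 6 host; on the CentOS 7 host the equivalent checks would be along these lines:
----------------
systemctl is-enabled glusterd
systemctl status glusterd --no-pager
----------------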

Hi Kaushal,
This is the host...which i rebooted...would you mind to let me know how i can make the glusterd service come up after network...i am using centos7...if network is the issue...
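Not from the original mail either, but on a CentOS 7 host the boot-time ordering can be inspected directly, which would confirm whether glusterd really did start before the network came up:
----------------
journalctl -b -u glusterd -u NetworkManager-wait-online --no-pager
systemd-analyze critical-chain glusterd.service
----------------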
On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote:
This peer cannot be identified:
"[2014-12-03 02:29:25.998153] D [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
I don't know why this address is not being resolved during boot time. If this is a valid peer, then the only reason I can think of is that the network is not up.
If you had previously detached the peer forcefully, then that could have left stale entries in some volumes. In this case as well, GlusterD will fail to identify the peer.
Do either of these reasons seem a possibility to you?

On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
[The rest of the quoted thread history, identical to the messages above, has been trimmed.]
[' >> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': >> >>> > >> u'bond1', >> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': >> >>> > >> 'VMNetwork', >> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': >> >>> > >> '1500', >> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': >> >>> > >> 'off', >> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, >> >>> > >> 'bridged': >> >>> > >> True, >> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', >> >>> > >> 'bootproto4': >> >>> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': >> >>> > >> '1500', >> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': >> >>> > >> {'Internal': >> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': >> >>> > >> '9000', >> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': >> >>> > >> 'off', >> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, >> >>> > >> 'ipv6addrs': >> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', >> >>> > >> 'stp': >> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', >> >>> > >> 'opts': >> >>> > >> {'topology_change_detected': '0', 'multicast_last_member_count': >> >>> > >> '2', >> >>> > >> 'hash_elasticity': '4', 'multicast_query_response_interval': >> >>> > >> '999', >> >>> > >> 'multicast_snooping': '1', 'multicast_startup_query_interval': >> >>> > >> '3124', >> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': '25496', >> >>> > >> 'max_age': >> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': >> >>> > >> '8000.001018cddaac', 'priority': '32768', >> >>> > >> 'multicast_membership_interval': >> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >> >>> > >> 'multicast_querier': >> >>> > >> '0', >> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': '199', >> >>> > >> 'topology_change': '0', 'bridge_id': '8000.001018cddaac', >> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >> >>> > >> 'gc_timer': >> >>> > >> '31', >> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >> >>> > >> 'multicast_query_interval': '12498', >> >>> > >> 'multicast_last_member_interval': >> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': >> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': >> >>> > >> 'no', >> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': >> >>> > >> 'no', >> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', >> >>> > >> 'TYPE': >> >>> > >> 'Bridge', >> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >> >>> > >> 'mtu': >> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >> >>> > >> 'ipv6gateway': >> >>> > >> '::', >> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': '0', >> >>> > >> 'multicast_last_member_count': '2', 'hash_elasticity': '4', >> >>> > >> 'multicast_query_response_interval': '999', >> >>> > >> 'multicast_snooping': >> >>> > >> '1', >> >>> > >> 'multicast_startup_query_interval': '3124', 'hello_timer': >> >>> > >> '131', >> >>> > >> 'multicast_querier_interval': '25496', 'max_age': '1999', >> >>> > >> 'hash_max': >> >>> > >> '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', >> >>> > >> 'priority': >> >>> > >> '32768', 'multicast_membership_interval': '25996', >> >>> > >> 'root_path_cost': >> >>> > >> '0', >> >>> > >> 'root_port': '0', 
'multicast_querier': '0', >> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': '199', >> >>> > >> 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', >> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >> >>> > >> 'gc_timer': >> >>> > >> '31', >> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >> >>> > >> 'multicast_query_interval': '12498', >> >>> > >> 'multicast_last_member_interval': >> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': >> >>> > >> ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', >> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': >> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', >> >>> > >> 'NM_CONTROLLED': >> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': >> >>> > >> 'bond1', >> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': >> >>> > >> '9000', >> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', >> >>> > >> 'speed': >> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', >> >>> > >> 'cfg': >> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': >> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', >> >>> > >> 'ONBOOT': >> >>> > >> 'no'}, >> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], >> >>> > >> 'hwaddr': >> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': >> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', >> >>> > >> 'NM_CONTROLLED': >> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': >> >>> > >> 'bond0', >> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': >> >>> > >> '1500', >> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', >> >>> > >> 'speed': >> >>> > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', >> >>> > >> 'cfg': >> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': >> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', >> >>> > >> 'ONBOOT': >> >>> > >> 'yes'}, >> >>> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], >> >>> > >> 'hwaddr': >> >>> > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', >> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], >> >>> > >> 'cpuFlags': >> >>> > >> >> >>> > >> >> >>> > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >> >>> > >> 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', >> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', >> >>> > >> '3.2', >> >>> > >> '3.3', >> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', >> >>> > >> 'bondings': >> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', >> >>> > >> 'slaves': >> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': >> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', >> >>> > >> 'BONDING_OPTS': >> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, >> >>> 
> >> 'ipv6addrs': >> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', >> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': >> >>> > >> ['eth0', >> >>> > >> 'eth1'], >> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': >> >>> > >> '10.10.10.6', >> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': >> >>> > >> 'no', >> >>> > >> 'MTU': >> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', >> >>> > >> 'BOOTPROTO': >> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond1', >> >>> > >> 'ONBOOT': >> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': >> >>> > >> '9000', >> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' >> >>> > >> <http://10.10.10.6/24%27>], 'hwaddr': '00:10:18:cd:da:ac', >> >>> > >> 'slaves': >> >>> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': '4'}}, >> >>> > >> 'bond2': >> >>> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': >> >>> > >> [], >> >>> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, >> >>> > >> 'mtu': >> >>> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': >> >>> > >> '00:00:00:00:00:00'}}, >> >>> > >> 'software_version': '4.16', 'memSize': '24019', 'cpuSpeed': >> >>> > >> '2667.000', >> >>> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, 7, 8, >> >>> > >> 9, >> >>> > >> 10, 11, >> >>> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', 'cpus': >> >>> > >> [0, >> >>> > >> 1, 2, >> >>> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow Man', >> >>> > >> 'vlans': >> >>> > >> {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', 'cfg': >> >>> > >> {'DEFROUTE': >> >>> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': 'no', >> >>> > >> 'GATEWAY': >> >>> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', >> >>> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', >> >>> > >> 'ONBOOT': >> >>> > >> 'yes'}, >> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 10, >> >>> > >> 'mtu': >> >>> > >> '1500', >> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['43.25.17.16/24 '] >> >>> > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': 'bond0', >> >>> > >> 'addr': >> >>> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', 'HOTPLUG': >> >>> > >> 'no', >> >>> > >> 'MTU': >> >>> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', 'ONBOOT': >> >>> > >> 'no'}, >> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': 36, >> >>> > >> 'mtu': >> >>> > >> '1500', >> >>> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': 'bond1', >> >>> > >> 'addr': >> >>> > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', 'HOTPLUG': >> >>> > >> 'no', >> >>> > >> 'MTU': >> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', 'ONBOOT': >> >>> > >> 'no'}, >> >>> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': 100, >> >>> > >> 'mtu': >> >>> > >> '9000', >> >>> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', >> >>> > >> 'kvmEnabled': >> >>> > >> 'true', >> >>> > >> 'guestOverhead': '65', 'cpuThreads': '24', 'emulatedMachines': >> >>> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', u'rhel6.2.0', >> >>> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', >> >>> > >> u'rhel5.4.0'], >> >>> > >> 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': >> >>> > >> '6', >> >>> > >> 'name': >> >>> > >> 'RHEL'}, 'lastClient': 
'10.10.10.2'}} >> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client >> >>> > >> [10.10.10.2]::call >> >>> > >> getHardwareInfo with () {} >> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return >> >>> > >> getHardwareInfo >> >>> > >> with {'status': {'message': 'Done', 'code': 0}, 'info': >> >>> > >> {'systemProductName': 'CS24-TY', 'systemSerialNumber': >> >>> > >> '7LWSPN1', >> >>> > >> 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': >> >>> > >> '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': >> >>> > >> 'Dell'}} >> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> > >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client >> >>> > >> [10.10.10.2]::call >> >>> > >> hostsList with () {} flowID [222e8036] >> >>> > >> Thread-13::ERROR::2014-11-24 >> >>> > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception >> >>> > >> occured >> >>> > >> Traceback (most recent call last): >> >>> > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in >> >>> > >> wrapper >> >>> > >> res = f(*args, **kwargs) >> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper >> >>> > >> rv = func(*args, **kwargs) >> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList >> >>> > >> return {'hosts': self.svdsmProxy.glusterPeerStatus()} >> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 50, in __call__ >> >>> > >> return callMethod() >> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda> >> >>> > >> **kwargs) >> >>> > >> File "<string>", line 2, in glusterPeerStatus >> >>> > >> File "/usr/lib64/python2.6/multiprocessing/managers.py", line >> >>> > >> 740, >> >>> > >> in >> >>> > >> _callmethod >> >>> > >> raise convert_to_error(kind, result) >> >>> > >> GlusterCmdExecFailedException: Command execution failed >> >>> > >> error: Connection failed. Please check if gluster daemon is >> >>> > >> operational. 
>> >>> > >> return code: 1 >> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> > >> >> >>> > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) >> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state >> >>> > >> init >> >>> > >> -> >> >>> > >> state preparing >> >>> > >> Thread-13::INFO::2014-11-24 >> >>> > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and >> >>> > >> protect: >> >>> > >> repoStats(options=None) >> >>> > >> Thread-13::INFO::2014-11-24 >> >>> > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and >> >>> > >> protect: >> >>> > >> repoStats, Return response: {} >> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) >> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} >> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> > >> >> >>> > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) >> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state >> >>> > >> preparing >> >>> > >> -> >> >>> > >> state finished >> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> > >> >> >>> > >> >> >>> > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >> >>> > >> Owner.releaseAll requests {} resources {} >> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> > >> >> >>> > >> >> >>> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> >>> > >> Owner.cancelAll requests {} >> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) >> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting >> >>> > >> False >> >>> > >> ------------------------------- >> >>> > >> >> >>> > >> [root@compute4 ~]# service glusterd status >> >>> > >> glusterd is stopped >> >>> > >> [root@compute4 ~]# chkconfig --list | grep glusterd >> >>> > >> glusterd 0:off 1:off 2:on 3:on 4:on 5:on >> >>> > >> 6:off >> >>> > >> [root@compute4 ~]# >> >>> > >> >> >>> > >> Thanks, >> >>> > >> Punit >> >>> > >> >> >>> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj < kmayilsa@redhat.com> >> >>> > >> wrote: >> >>> > >> >> >>> > >>> Can you send the corresponding error in vdsm.log from
>> >>> > >>> host? >> >>> > >>> >> >>> > >>> Also check if glusterd service is running. >> >>> > >>> >> >>> > >>> Thanks, >> >>> > >>> Kanagaraj >> >>> > >>> >> >>> > >>> >> >>> > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote: >> >>> > >>> >> >>> > >>> Hi, >> >>> > >>> >> >>> > >>> After reboot my Hypervisior host can not activate again in the >> >>> > >>> cluster >> >>> > >>> and failed with the following error :- >> >>> > >>> >> >>> > >>> Gluster command [<UNKNOWN>] failed on server... >> >>> > >>> >> >>> > >>> Engine logs :- >> >>> > >>> >> >>> > >>> 2014-11-24 18:05:28,397 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-64) START, >> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = >> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90 >> >>> > >>> 2014-11-24 18:05:30,609 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-64) FINISH, >> >>> > >>> GlusterVolumesListVDSCommand, >> >>> > >>> return: >> >>> > >>> >> >>> > >>> >> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0 }, >> >>> > >>> log id: 5f251c90 >> >>> > >>> 2014-11-24 18:05:33,768 INFO >> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >> >>> > >>> (ajp--127.0.0.1-8702-8) >> >>> > >>> [287d570d] Lock Acquired to object EngineLock [exclusiveLocks= >> >>> > >>> key: >> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS >> >>> > >>> , sharedLocks= ] >> >>> > >>> 2014-11-24 18:05:33,795 INFO >> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Running command: >> >>> > >>> ActivateVdsCommand internal: false. Entities affected : ID: >> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction group >> >>> > >>> MANIPULATE_HOST >> >>> > >>> with role type ADMIN >> >>> > >>> 2014-11-24 18:05:33,796 INFO >> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Before acquiring >> >>> > >>> lock in >> >>> > >>> order to prevent monitoring for host Compute5 from data-center >> >>> > >>> SV_WTC >> >>> > >>> 2014-11-24 18:05:33,797 INFO >> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock acquired, >> >>> > >>> from >> >>> > >>> now a >> >>> > >>> monitoring of host will be skipped for host Compute5 from >> >>> > >>> data-center >> >>> > >>> SV_WTC >> >>> > >>> 2014-11-24 18:05:33,817 INFO >> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] START, >> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, >> >>> > >>> nonOperationalReason=NONE, stopSpmFailureLogged=false), log id: >> >>> > >>> 1cbc7311 >> >>> > >>> 2014-11-24 18:05:33,820 INFO >> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, >> >>> > >>> SetVdsStatusVDSCommand, log id: 1cbc7311 >> >>> > >>> 2014-11-24 18:05:34,086 INFO >> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Activate finished. Lock >> >>> > >>> released. 
>> >>> > >>> Monitoring can run now for host Compute5 from data-center >> >>> > >>> SV_WTC >> >>> > >>> 2014-11-24 18:05:34,088 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Correlation ID: 287d570d, >> >>> > >>> Job >> >>> > >>> ID: >> >>> > >>> 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, Custom >> >>> > >>> Event ID: >> >>> > >>> -1, Message: Host Compute5 was activated by admin. >> >>> > >>> 2014-11-24 18:05:34,090 INFO >> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Lock freed to object >> >>> > >>> EngineLock >> >>> > >>> [exclusiveLocks= key: 0bf6b00f-7947-4411-b55a-cc5eea2b381a >> >>> > >>> value: >> >>> > >>> VDS >> >>> > >>> , sharedLocks= ] >> >>> > >>> 2014-11-24 18:05:35,792 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] START, >> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = >> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832 >> >>> > >>> 2014-11-24 18:05:37,064 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) START, >> >>> > >>> GetHardwareInfoVDSCommand(HostName = Compute5, HostId = >> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >> >>> > >>> vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log >> >>> > >>> id: >> >>> > >>> 6d560cc2 >> >>> > >>> 2014-11-24 18:05:37,074 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) FINISH, >> >>> > >>> GetHardwareInfoVDSCommand, log >> >>> > >>> id: 6d560cc2 >> >>> > >>> 2014-11-24 18:05:37,093 WARN >> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsManager] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) Host Compute5 is running >> >>> > >>> with >> >>> > >>> disabled >> >>> > >>> SELinux. >> >>> > >>> 2014-11-24 18:05:37,127 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running command: >> >>> > >>> HandleVdsCpuFlagsOrClusterChangedCommand internal: true. >> >>> > >>> Entities >> >>> > >>> affected >> >>> > >>> : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >> >>> > >>> 2014-11-24 18:05:37,147 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, >> >>> > >>> GlusterServersListVDSCommand(HostName = Compute5, HostId = >> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87 >> >>> > >>> 2014-11-24 18:05:37,164 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, >> >>> > >>> GlusterServersListVDSCommand, log id: 4faed87 >> >>> > >>> 2014-11-24 18:05:37,189 INFO >> >>> > >>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running command: >> >>> > >>> SetNonOperationalVdsCommand internal: true. 
Entities affected : >> >>> > >>> ID: >> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >> >>> > >>> 2014-11-24 18:05:37,206 INFO >> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, >> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=NonOperational, >> >>> > >>> nonOperationalReason=GLUSTER_COMMAND_FAILED, >> >>> > >>> stopSpmFailureLogged=false), >> >>> > >>> log id: fed5617 >> >>> > >>> 2014-11-24 18:05:37,209 INFO >> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, >> >>> > >>> SetVdsStatusVDSCommand, log id: fed5617 >> >>> > >>> 2014-11-24 18:05:37,223 ERROR >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: >> >>> > >>> 4a84c4e5, >> >>> > >>> Job >> >>> > >>> ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: null, >> >>> > >>> Custom >> >>> > >>> Event >> >>> > >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed on server >> >>> > >>> Compute5. >> >>> > >>> 2014-11-24 18:05:37,243 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Correlation ID: >> >>> > >>> null, >> >>> > >>> Call >> >>> > >>> Stack: null, Custom Event ID: -1, Message: Status of host >> >>> > >>> Compute5 >> >>> > >>> was >> >>> > >>> set >> >>> > >>> to NonOperational. >> >>> > >>> 2014-11-24 18:05:37,272 INFO >> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsVersionCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running command: >> >>> > >>> HandleVdsVersionCommand internal: true. Entities affected : >> >>> > >>> ID: >> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >> >>> > >>> 2014-11-24 18:05:37,274 INFO >> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] >> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host >> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is already in >> >>> > >>> NonOperational status for reason GLUSTER_COMMAND_FAILED. >> >>> > >>> SetNonOperationalVds command is skipped. >> >>> > >>> 2014-11-24 18:05:38,065 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, >> >>> > >>> GlusterVolumesListVDSCommand, return: >> >>> > >>> >> >>> > >>> >> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1 }, >> >>> > >>> log id: 48a0c832 >> >>> > >>> 2014-11-24 18:05:43,243 INFO >> >>> > >>> >> >>> > >>> >> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >> >>> > >>> (DefaultQuartzScheduler_Worker-35) START, >> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId = >> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc >> >>> > >>> ^C >> >>> > >>> [root@ccr01 ~]# >> >>> > >>> >> >>> > >>> Thanks, >> >>> > >>> Punit >> >>> > >>> >> >>> > >>> >> >>> > >>> _______________________________________________ >> >>> > >>> Users mailing >> >>> > >>> >> >>> > >>> listUsers@ovirt.orghttp://
lists.ovirt.org/mailman/listinfo/users
>> >>> > >>> >> >>> > >>> >> >>> > >>> >> >>> > >> >> >>> > >> >> >>> > > >> >>> > > >> >>> > >> >> >> >> >> > >> > >> > _______________________________________________ >> > Gluster-users mailing list >> > Gluster-users@gluster.org >> > http://supercolony.gluster.org/mailman/listinfo/gluster-users > >

I don't know much about how the network target is brought up in CentOS 7, but I'll try as much as I can. It seems to me that, after the network has been brought up and by the time GlusterD is started, either a. the machine hasn't yet received its hostname, or b. it hasn't yet registered with the name server. This is causing name resolution failures. I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a. So it seems to be b. But that kind of registration normally happens only in DDNS setups, which doesn't seem to be the case for you. Both of these reasons might be wrong (most likely wrong). You'd do well to ask for help from someone with more experience in systemd + networking. ~kaushal On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
This is the host which I rebooted... would you mind letting me know how I can make the glusterd service come up after the network... I am using CentOS 7... if the network is the issue...
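One way to express that ordering on CentOS 7 is a systemd drop-in for glusterd; the following is only a sketch based on the stock glusterd.service quoted further down in this thread, and the drop-in path plus the NetworkManager-wait-online step are assumptions about the setup, not something verified here:

--------------------
# /etc/systemd/system/glusterd.service.d/wait-for-network.conf
# Order glusterd after the network is actually online (addresses and
# name resolution available), not just after network.target.
[Unit]
Wants=network-online.target
After=network-online.target rpcbind.service

# Afterwards, reload systemd and make sure something populates
# network-online.target (NetworkManager-wait-online.service when
# NetworkManager manages the interfaces):
#   systemctl daemon-reload
#   systemctl enable NetworkManager-wait-online.service
--------------------

Note that the stock unit quoted below also carries "Before=network-online.target"; if that conflicts with the drop-in, the fallback is to copy the whole unit to /etc/systemd/system/glusterd.service and edit the After=/Before= lines there directly.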
On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote:
This peer cannot be identified.
" [2014-12-03 02:29:25.998153] D [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
I don't know why this address is not being resolved during boot time. If this is a valid peer, then the only reason I can think of is that the network is not up.
If you had previously detached the peer forcefully, then that could have left stale entries in some volumes. In that case as well, GlusterD will fail to identify the peer.
Do either of these reasons seem a possibility to you?
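If it helps to check both possibilities on the rebooted host, something like the following would do it (a sketch only; the hostname is taken from the log line above, and /var/lib/glusterd is the working directory glusterd reports in the logs later in this thread):

----------------
# 1. Is the address from the error resolvable right now?
getent hosts cpu05.zne01.hkg1.ovt.36stack.com

# 2. What does glusterd have on disk for peers and bricks?
#    Stale entries from a forced detach would show up here.
cat /var/lib/glusterd/peers/*
grep -r hostname /var/lib/glusterd/vols/*/bricks/ 2>/dev/null

# 3. Compare with the live view once glusterd is running:
gluster peer status
----------------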
On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyoe5 and http://ur1.ca/iyoed
On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit, In the logs you've provided, GlusterD appears to be running correctly. Could you provide the logs for the time period when GlusterD attempts to start but fails?
~kaushal
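To capture exactly that window, something like this should work (a sketch; the log path is the one Kanagaraj pointed at earlier in the thread, and journalctl is assumed to be available on the CentOS 7 host):

----------------
# Everything glusterd logged since the last boot, via the journal:
journalctl -b -u glusterd

# The same period from glusterd's own log file (look for the entries
# written right after the reboot timestamp):
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
----------------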
On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyhs5 and http://ur1.ca/iyhue
Thanks, punit
On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit, Could you start Glusterd in debug mode and provide the logs here? To start it in debug mode, append '-LDEBUG' to the ExecStart line in the service file.
~kaushal
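For reference, the edit would look roughly like this (a sketch based on the stock unit file quoted further down in this thread; adjust the path if the package installs the unit elsewhere):

--------------------
# /usr/lib/systemd/system/glusterd.service (or a copy under
# /etc/systemd/system/), with the debug flag appended:
[Service]
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid -LDEBUG
--------------------

Then pick up the change with 'systemctl daemon-reload' and 'systemctl restart glusterd'; the debug output lands in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log.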
On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal <hypunit@gmail.com> wrote: > Hi, > > Can Any body help me on this ?? > > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal <hypunit@gmail.com> > wrote: >> >> Hi Kaushal, >> >> Thanks for the detailed reply....let me explain my setup first :- >> >> 1. Ovirt Engine >> 2. 4* host as well as storage machine (Host and gluster combined) >> 3. Every host has 24 bricks... >> >> Now whenever the host machine reboot...it can come up but can not >> join the >> cluster again and through the following error "Gluster command >> [<UNKNOWN>] >> failed on server.." >> >> Please check my comment in line :- >> >> 1. Use the same string for doing the peer probe and for the brick >> address >> during volume create/add-brick. Ideally, we suggest you use >> properly >> resolvable FQDNs everywhere. If that is not possible, then use only >> IP >> addresses. Try to avoid short names. >> --------------- >> [root@cpu05 ~]# gluster peer status >> Number of Peers: 3 >> >> Hostname: cpu03.stack.com >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >> State: Peer in Cluster (Connected) >> >> Hostname: cpu04.stack.com >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >> State: Peer in Cluster (Connected) >> Other names: >> 10.10.0.8 >> >> Hostname: cpu02.stack.com >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >> State: Peer in Cluster (Connected) >> [root@cpu05 ~]# >> ---------------- >> 2. During boot up, make sure to launch glusterd only after the >> network is >> up. This will allow the new peer identification mechanism to do its >> job correctly. >> >> I think the service itself doing the same job.... >> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >> [Unit] >> Description=GlusterFS, a clustered file-system server >> After=network.target rpcbind.service >> Before=network-online.target >> >> [Service] >> Type=forking >> PIDFile=/var/run/glusterd.pid >> LimitNOFILE=65536 >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >> KillMode=process >> >> [Install] >> WantedBy=multi-user.target >> [root@cpu05 ~]# >> -------------------- >> >> gluster logs :- >> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >> [glusterfsd.c:2018:main] >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version >> 3.6.1 >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: >> Maximum allowed open file descriptors set to 65536 >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: >> Using >> /var/lib/glusterd as working directory >> [2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >> such device) >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >> 0-rdma.management: >> Failed to initialize IB Device >> [2014-11-24 09:22:22.155285] E >> [rpc-transport.c:333:rpc_transport_load] >> 0-rpc-transport: 'rdma' initialization failed >> [2014-11-24 09:22:22.155354] W >> [rpcsvc.c:1524:rpcsvc_transport_create] >> 0-rpc-service: cannot create listener, initing the transport failed >> [2014-11-24 09:22:22.156290] I >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >> geo-replication >> module not installed in the system >> [2014-11-24 09:22:22.161318] I >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >> retrieved >> op-version: 30600 >> [2014-11-24 09:22:22.821800] I >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >> 0-management: >> connect returned 0 >> [2014-11-24 
09:22:22.825810] I >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >> 0-management: >> connect returned 0 >> [2014-11-24 09:22:22.828705] I >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >> 0-management: >> connect returned 0 >> [2014-11-24 09:22:22.828771] I >> [rpc-clnt.c:969:rpc_clnt_connection_init] >> 0-management: setting frame-timeout to 600 >> [2014-11-24 09:22:22.832670] I >> [rpc-clnt.c:969:rpc_clnt_connection_init] >> 0-management: setting frame-timeout to 600 >> [2014-11-24 09:22:22.835919] I >> [rpc-clnt.c:969:rpc_clnt_connection_init] >> 0-management: setting frame-timeout to 600 >> [2014-11-24 09:22:22.840209] E >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >> resolve >> brick failed in restore >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >> 0-management: >> Initialization of volume 'management' failed, review your volfile >> again >> [2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] >> 0-management: initializing translator failed >> [2014-11-24 09:22:22.840264] E >> [graph.c:525:glusterfs_graph_activate] >> 0-graph: init failed >> [2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] >> (--> >> 0-: received signum (0), shutting down >> >> Thanks, >> Punit >> >> >> >> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> >> wrote: >>> >>> Based on the logs I can guess that glusterd is being started >>> before >>> the network has come up and that the addresses given to bricks do >>> not >>> directly match the addresses used in during peer probe. >>> >>> The gluster_after_reboot log has the line "[2014-11-25 >>> 06:46:09.972113] E >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >>> 0-glusterd: resolve brick failed in restore". >>> >>> Brick resolution fails when glusterd cannot match the address for >>> the >>> brick, with one of the peers. Brick resolution happens in two >>> phases, >>> 1. We first try to identify the peer by performing string >>> comparisions >>> with the brick address and the peer addresses (The peer names will >>> be >>> the names/addresses that were given when the peer was probed). >>> 2. If we don't find a match from step 1, we will then resolve all >>> the >>> brick address and the peer addresses into addrinfo structs, and >>> then >>> compare these structs to find a match. This process should >>> generally >>> find a match if available. This will fail only if the network is >>> not >>> up yet as we cannot resolve addresses. >>> >>> The above steps are applicable only to glusterfs versions >=3.6. >>> They >>> were introduced to reduce problems with peer identification, like >>> the >>> one you encountered >>> >>> Since both of the steps failed to find a match in one run, but >>> succeeded later, we can come to the conclusion that, >>> a) the bricks don't have the exact same string used in peer probe >>> for >>> their addresses as step 1 failed, and >>> b) the network was not up in the initial run, as step 2 failed >>> during >>> the initial run, but passed in the second run. >>> >>> Please let me know if my conclusion is correct. >>> >>> If it is, you can solve your problem in two ways. >>> 1. Use the same string for doing the peer probe and for the brick >>> address during volume create/add-brick. Ideally, we suggest you >>> use >>> properly resolvable FQDNs everywhere. If that is not possible, >>> then >>> use only IP addresses. Try to avoid short names. >>> 2. 
During boot up, make sure to launch glusterd only after the >>> network >>> is up. This will allow the new peer identification mechanism to do >>> its >>> job correctly. >>> >>> >>> If you have already followed these steps and yet still hit the >>> problem, then please provide more information (setup, logs, etc.). >>> It >>> could be much different problem that you are facing. >>> >>> ~kaushal >>> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal >>> <hypunit@gmail.com> >>> wrote: >>> > Is there any one can help on this ?? >>> > >>> > Thanks, >>> > punit >>> > >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal >>> > <hypunit@gmail.com> >>> > wrote: >>> >> >>> >> Hi, >>> >> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >>> >> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >>> >> <kmayilsa@redhat.com> >>> >> wrote: >>> >>> >>> >>> [+Gluster-users@gluster.org] >>> >>> >>> >>> "Initialization of volume 'management' failed, review your >>> >>> volfile >>> >>> again", glusterd throws this error when the service is started >>> >>> automatically >>> >>> after the reboot. But the service is successfully started >>> >>> later >>> >>> manually by >>> >>> the user. >>> >>> >>> >>> can somebody from gluster-users please help on this? >>> >>> >>> >>> glusterfs version: 3.5.1 >>> >>> >>> >>> Thanks, >>> >>> Kanagaraj >>> >>> >>> >>> ----- Original Message ----- >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >>> >>> > Cc: users@ovirt.org >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] >>> >>> > failed on >>> >>> > server... >>> >>> > >>> >>> > Hi Kanagraj, >>> >>> > >>> >>> > Please check the attached log files....i didn't find any >>> >>> > thing >>> >>> > special.... >>> >>> > >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj >>> >>> > <kmayilsa@redhat.com> >>> >>> > wrote: >>> >>> > >>> >>> > > Do you see any errors in >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or >>> >>> > > vdsm.log >>> >>> > > when >>> >>> > > the >>> >>> > > service is trying to start automatically after the reboot? >>> >>> > > >>> >>> > > Thanks, >>> >>> > > Kanagaraj >>> >>> > > >>> >>> > > >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >>> >>> > > >>> >>> > > Hi Kanagaraj, >>> >>> > > >>> >>> > > Yes...once i will start the gluster service and then >>> >>> > > vdsmd >>> >>> > > ...the >>> >>> > > host >>> >>> > > can connect to cluster...but the question is why it's not >>> >>> > > started >>> >>> > > even it >>> >>> > > has chkconfig enabled... >>> >>> > > >>> >>> > > I have tested it in two host cluster >>> >>> > > environment...(Centos 6.6 >>> >>> > > and >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed to >>> >>> > > reconnect >>> >>> > > in >>> >>> > > to >>> >>> > > cluster after reboot.... >>> >>> > > >>> >>> > > In both the environment glusterd enabled for next >>> >>> > > boot....but >>> >>> > > it's >>> >>> > > failed with the same error....seems it's bug in either >>> >>> > > gluster or >>> >>> > > Ovirt ?? >>> >>> > > >>> >>> > > Please help me to find the workaround here if can not >>> >>> > > resolve >>> >>> > > it...as >>> >>> > > without this the Host machine can not connect after >>> >>> > > reboot....that >>> >>> > > means >>> >>> > > engine will consider it as down and every time need to >>> >>> > > manually >>> >>> > > start >>> >>> > > the >>> >>> > > gluster service and vdsmd... ?? 
>>> >>> > > >>> >>> > > Thanks, >>> >>> > > Punit >>> >>> > > >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj >>> >>> > > <kmayilsa@redhat.com> >>> >>> > > wrote: >>> >>> > > >>> >>> > >> From vdsm.log "error: Connection failed. Please check if >>> >>> > >> gluster >>> >>> > >> daemon >>> >>> > >> is operational." >>> >>> > >> >>> >>> > >> Starting glusterd service should fix this issue. 'service >>> >>> > >> glusterd >>> >>> > >> start' >>> >>> > >> But i am wondering why the glusterd was not started >>> >>> > >> automatically >>> >>> > >> after >>> >>> > >> the reboot. >>> >>> > >> >>> >>> > >> Thanks, >>> >>> > >> Kanagaraj >>> >>> > >> >>> >>> > >> >>> >>> > >> >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >>> >>> > >> >>> >>> > >> Hi Kanagaraj, >>> >>> > >> >>> >>> > >> Please find the attached VDSM logs :- >>> >>> > >> >>> >>> > >> ---------------- >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> >>> >>> > >> >>> >>> > >> >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>> >>> > >> Owner.cancelAll requests {} >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 >>> >>> > >> aborting >>> >>> > >> False >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> >>> >>> > >> >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from >>> >>> > >> state >>> >>> > >> init >>> >>> > >> -> >>> >>> > >> state preparing >>> >>> > >> Thread-13::INFO::2014-11-24 >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and >>> >>> > >> protect: >>> >>> > >> repoStats(options=None) >>> >>> > >> Thread-13::INFO::2014-11-24 >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and >>> >>> > >> protect: >>> >>> > >> repoStats, Return response: {} >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> >>> >>> > >> >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from >>> >>> > >> state >>> >>> > >> preparing >>> >>> > >> -> >>> >>> > >> state finished >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> >>> >>> > >> >>> >>> > >> >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>> >>> > >> Owner.releaseAll requests {} resources {} >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> >>> >>> > >> >>> >>> > >> >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>> >>> > >> Owner.cancelAll requests {} >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 >>> >>> > >> aborting >>> >>> > >> False >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client >>> >>> > >> [10.10.10.2]::call >>> >>> > >> getCapabilities with () {} >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,553::utils::738::root::(execCmd) >>> >>> > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) >>> >>> > >> Thread-13::DEBUG::2014-11-24 
>>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm >>> >>> > >> package >>> >>> > >> ('gluster-swift',) not found >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm >>> >>> > >> package >>> >>> > >> ('gluster-swift-object',) not found >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm >>> >>> > >> package >>> >>> > >> ('gluster-swift-plugin',) not found >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>> >>> > >> package >>> >>> > >> ('gluster-swift-account',) not found >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>> >>> > >> package >>> >>> > >> ('gluster-swift-proxy',) not found >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>> >>> > >> package >>> >>> > >> ('gluster-swift-doc',) not found >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>> >>> > >> package >>> >>> > >> ('gluster-swift-container',) not found >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>> >>> > >> package >>> >>> > >> ('glusterfs-geo-replication',) not found >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,600::caps::646::root::(get) >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 >>> >>> > >> required >>> >>> > >> >= >>> >>> > >> 0.10.2-31 >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return >>> >>> > >> getCapabilities >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, 'info': >>> >>> > >> {'HBAInventory': >>> >>> > >> {'iSCSI': [{'InitiatorName': >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >>> >>> > >> 'FC': >>> >>> > >> []}, 'packages2': {'kernel': {'release': >>> >>> > >> '431.el6.x86_64', >>> >>> > >> 'buildtime': >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >>> >>> > >> {'release': >>> >>> > >> '1.el6', >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, >>> >>> > >> 'glusterfs-fuse': >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': >>> >>> > >> '3.5.1'}, >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': >>> >>> > >> 1402324637L, >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': >>> >>> > >> '1.gitdb83943.el6', >>> >>> > >> 'buildtime': >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': >>> >>> > >> {'release': >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': >>> >>> > >> '0.12.1.2'}, >>> >>> > >> 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': >>> >>> > >> 1402435700L, >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': {'release': >>> >>> > >> '29.el6_5.9', >>> >>> > >> 'buildtime': >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': >>> >>> > >> {'release': >>> >>> > >> '1.el6', >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >>> >>> > >> {'release': >>> >>> > >> '2.el6', >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >>> >>> > >> 'glusterfs-server': >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': >>> >>> > >> '3.5.1'}}, >>> >>> > >> 'numaNodeDistance': {'1': [20, 

I just remembered this. There was another user on the mailing list a while back who had a similar issue of GlusterD failing to start. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration, and the problem vanished when he began using DHCP. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager. This could be one more thing for you to look at. I'll try to find the mail thread to see whether it was the same problem as yours.

~kaushal

On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com> wrote:
I don't know much about how the network target is brought up in CentOS 7, but I'll try to help as much as I can.
It seems to me that, after the network has been brought up and by the time GlusterD is started,
a. the machine hasn't yet received its hostname, or
b. it hasn't yet registered with the name server.
This is causing name resolution failures.
I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a.
So it seems to be b. But that kind of registration with the name server happens only in DDNS setups, which doesn't seem to be the case for you.
Both of these reasons might be wrong (most likely wrong). You'd do well to ask for help from someone with more experience in systemd + networking.
~kaushal
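A rough way to check that ordering on the rebooted host (a sketch; it assumes systemd is in use and borrows the peer name from the glusterd log quoted further down in this thread):

# When did glusterd start relative to the network units on this boot?
systemd-analyze critical-chain glusterd.service
journalctl -b -u glusterd.service -u network.service -u NetworkManager-wait-online.service --no-pager

# Can the host resolve its own FQDN and its peers right now?
hostname --fqdn
getent hosts cpu05.zne01.hkg1.ovt.36stack.com

If the journal shows glusterd starting before the network units have finished, or if the getent lookup fails right after boot, that would support the name-resolution theory above.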
On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
This is the host which I rebooted... would you mind letting me know how I can make the glusterd service come up after the network, if the network is indeed the issue? I am using CentOS 7.
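One common way to do this on a systemd host such as CentOS 7 is to order glusterd after network-online.target. Below is a minimal sketch based on the stock glusterd.service quoted further down in this thread; it assumes NetworkManager is managing the interfaces and has not been tested on this particular setup:

# Copy the unit to /etc/systemd/system (which shadows the copy in /usr/lib)
# and order it after network-online.target instead of network.target.
cat > /etc/systemd/system/glusterd.service <<'EOF'
[Unit]
Description=GlusterFS, a clustered file-system server
# changed from "After=network.target rpcbind.service" / "Before=network-online.target"
After=network-online.target rpcbind.service
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

# network-online.target only actually waits for the network if a wait-online
# service is enabled; with NetworkManager on CentOS 7 that is:
systemctl enable NetworkManager-wait-online.service

systemctl daemon-reload
systemctl reenable glusterd.service

Whether this alone is enough depends on whether name resolution is also ready by the time network-online.target is reached, which is why the DNS question above still matters.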
On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote:
This peer cannot be identified.
" [2014-12-03 02:29:25.998153] D [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
I don't know why this address is not being resolved at boot time. If this is a valid peer, the only reason I can think of is that the network is not up.
If you had previously detached the peer forcefully, that could have left stale entries in some volumes. In that case as well, GlusterD will fail to identify the peer.
Do either of these reasons seem a possibility to you?
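Both possibilities can be checked quickly on the rebooted host. A sketch, using the peer name from the log line above and the /var/lib/glusterd working directory that glusterd reports in its own log:

# 1. Can the peer name from the "Unable to find friend" message be resolved?
getent hosts cpu05.zne01.hkg1.ovt.36stack.com

# 2. Are there stale peer entries left behind by an earlier forced detach?
#    (the peer files can be inspected even while glusterd is down)
grep -H . /var/lib/glusterd/peers/*
# compare against the view from a node where glusterd is running:
gluster peer status

A peer file whose uuid/hostname does not match anything in 'gluster peer status' on the other nodes would be a candidate for a stale entry.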
On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyoe5 and http://ur1.ca/iyoed
On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit,

In the logs you've provided, GlusterD appears to be running correctly. Could you provide the logs for the time period when GlusterD attempts to start but fails?
~kaushal
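For completeness, one way to capture exactly that window (a sketch assuming systemd journals and the default glusterd log location mentioned elsewhere in this thread):

# glusterd's own log, which should contain the failed start right after reboot:
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# systemd's view of the failed start on the current boot (use -b -1 for the previous boot):
journalctl -b -u glusterd.service --no-pager
systemctl status glusterd.service -l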
On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyhs5 and http://ur1.ca/iyhue
Thanks, punit
On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> wrote: > > Hey Punit, > Could you start Glusterd in debug mode and provide the logs here? > To start it in debug mode, append '-LDEBUG' to the ExecStart line in > the service file. > > ~kaushal > > On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal <hypunit@gmail.com> > wrote: > > Hi, > > > > Can Any body help me on this ?? > > > > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal <hypunit@gmail.com> > > wrote: > >> > >> Hi Kaushal, > >> > >> Thanks for the detailed reply....let me explain my setup first :- > >> > >> 1. Ovirt Engine > >> 2. 4* host as well as storage machine (Host and gluster combined) > >> 3. Every host has 24 bricks... > >> > >> Now whenever the host machine reboot...it can come up but can not > >> join the > >> cluster again and through the following error "Gluster command > >> [<UNKNOWN>] > >> failed on server.." > >> > >> Please check my comment in line :- > >> > >> 1. Use the same string for doing the peer probe and for the brick > >> address > >> during volume create/add-brick. Ideally, we suggest you use > >> properly > >> resolvable FQDNs everywhere. If that is not possible, then use only > >> IP > >> addresses. Try to avoid short names. > >> --------------- > >> [root@cpu05 ~]# gluster peer status > >> Number of Peers: 3 > >> > >> Hostname: cpu03.stack.com > >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb > >> State: Peer in Cluster (Connected) > >> > >> Hostname: cpu04.stack.com > >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 > >> State: Peer in Cluster (Connected) > >> Other names: > >> 10.10.0.8 > >> > >> Hostname: cpu02.stack.com > >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 > >> State: Peer in Cluster (Connected) > >> [root@cpu05 ~]# > >> ---------------- > >> 2. During boot up, make sure to launch glusterd only after the > >> network is > >> up. This will allow the new peer identification mechanism to do its > >> job correctly. > >> >> I think the service itself doing the same job.... 
> >> > >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service > >> [Unit] > >> Description=GlusterFS, a clustered file-system server > >> After=network.target rpcbind.service > >> Before=network-online.target > >> > >> [Service] > >> Type=forking > >> PIDFile=/var/run/glusterd.pid > >> LimitNOFILE=65536 > >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid > >> KillMode=process > >> > >> [Install] > >> WantedBy=multi-user.target > >> [root@cpu05 ~]# > >> -------------------- > >> > >> gluster logs :- > >> > >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] > >> [glusterfsd.c:2018:main] > >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version > >> 3.6.1 > >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) > >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: > >> Maximum allowed open file descriptors set to 65536 > >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: > >> Using > >> /var/lib/glusterd as working directory > >> [2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] > >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No > >> such device) > >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] > >> 0-rdma.management: > >> Failed to initialize IB Device > >> [2014-11-24 09:22:22.155285] E > >> [rpc-transport.c:333:rpc_transport_load] > >> 0-rpc-transport: 'rdma' initialization failed > >> [2014-11-24 09:22:22.155354] W > >> [rpcsvc.c:1524:rpcsvc_transport_create] > >> 0-rpc-service: cannot create listener, initing the transport failed > >> [2014-11-24 09:22:22.156290] I > >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: > >> geo-replication > >> module not installed in the system > >> [2014-11-24 09:22:22.161318] I > >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: > >> retrieved > >> op-version: 30600 > >> [2014-11-24 09:22:22.821800] I > >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] > >> 0-management: > >> connect returned 0 > >> [2014-11-24 09:22:22.825810] I > >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] > >> 0-management: > >> connect returned 0 > >> [2014-11-24 09:22:22.828705] I > >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] > >> 0-management: > >> connect returned 0 > >> [2014-11-24 09:22:22.828771] I > >> [rpc-clnt.c:969:rpc_clnt_connection_init] > >> 0-management: setting frame-timeout to 600 > >> [2014-11-24 09:22:22.832670] I > >> [rpc-clnt.c:969:rpc_clnt_connection_init] > >> 0-management: setting frame-timeout to 600 > >> [2014-11-24 09:22:22.835919] I > >> [rpc-clnt.c:969:rpc_clnt_connection_init] > >> 0-management: setting frame-timeout to 600 > >> [2014-11-24 09:22:22.840209] E > >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: > >> resolve > >> brick failed in restore > >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] > >> 0-management: > >> Initialization of volume 'management' failed, review your volfile > >> again > >> [2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] > >> 0-management: initializing translator failed > >> [2014-11-24 09:22:22.840264] E > >> [graph.c:525:glusterfs_graph_activate] > >> 0-graph: init failed > >> [2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] > >> (--> > >> 0-: received signum (0), shutting down > >> > >> Thanks, > >> Punit > >> > >> > >> > >> > >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> > >> wrote: > >>> > >>> Based on the logs I can guess that glusterd is being 
started > >>> before > >>> the network has come up and that the addresses given to bricks do > >>> not > >>> directly match the addresses used in during peer probe. > >>> > >>> The gluster_after_reboot log has the line "[2014-11-25 > >>> 06:46:09.972113] E > >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] > >>> 0-glusterd: resolve brick failed in restore". > >>> > >>> Brick resolution fails when glusterd cannot match the address for > >>> the > >>> brick, with one of the peers. Brick resolution happens in two > >>> phases, > >>> 1. We first try to identify the peer by performing string > >>> comparisions > >>> with the brick address and the peer addresses (The peer names will > >>> be > >>> the names/addresses that were given when the peer was probed). > >>> 2. If we don't find a match from step 1, we will then resolve all > >>> the > >>> brick address and the peer addresses into addrinfo structs, and > >>> then > >>> compare these structs to find a match. This process should > >>> generally > >>> find a match if available. This will fail only if the network is > >>> not > >>> up yet as we cannot resolve addresses. > >>> > >>> The above steps are applicable only to glusterfs versions >=3.6. > >>> They > >>> were introduced to reduce problems with peer identification, like > >>> the > >>> one you encountered > >>> > >>> Since both of the steps failed to find a match in one run, but > >>> succeeded later, we can come to the conclusion that, > >>> a) the bricks don't have the exact same string used in peer probe > >>> for > >>> their addresses as step 1 failed, and > >>> b) the network was not up in the initial run, as step 2 failed > >>> during > >>> the initial run, but passed in the second run. > >>> > >>> Please let me know if my conclusion is correct. > >>> > >>> If it is, you can solve your problem in two ways. > >>> 1. Use the same string for doing the peer probe and for the brick > >>> address during volume create/add-brick. Ideally, we suggest you > >>> use > >>> properly resolvable FQDNs everywhere. If that is not possible, > >>> then > >>> use only IP addresses. Try to avoid short names. > >>> 2. During boot up, make sure to launch glusterd only after the > >>> network > >>> is up. This will allow the new peer identification mechanism to do > >>> its > >>> job correctly. > >>> > >>> > >>> If you have already followed these steps and yet still hit the > >>> problem, then please provide more information (setup, logs, etc.). > >>> It > >>> could be much different problem that you are facing. > >>> > >>> ~kaushal > >>> > >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal > >>> <hypunit@gmail.com> > >>> wrote: > >>> > Is there any one can help on this ?? > >>> > > >>> > Thanks, > >>> > punit > >>> > > >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal > >>> > <hypunit@gmail.com> > >>> > wrote: > >>> >> > >>> >> Hi, > >>> >> > >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 > >>> >> > >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy > >>> >> <kmayilsa@redhat.com> > >>> >> wrote: > >>> >>> > >>> >>> [+Gluster-users@gluster.org] > >>> >>> > >>> >>> "Initialization of volume 'management' failed, review your > >>> >>> volfile > >>> >>> again", glusterd throws this error when the service is started > >>> >>> automatically > >>> >>> after the reboot. But the service is successfully started > >>> >>> later > >>> >>> manually by > >>> >>> the user. > >>> >>> > >>> >>> can somebody from gluster-users please help on this? 
> >>> >>> > >>> >>> glusterfs version: 3.5.1 > >>> >>> > >>> >>> Thanks, > >>> >>> Kanagaraj > >>> >>> > >>> >>> ----- Original Message ----- > >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> > >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> > >>> >>> > Cc: users@ovirt.org > >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM > >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] > >>> >>> > failed on > >>> >>> > server... > >>> >>> > > >>> >>> > Hi Kanagraj, > >>> >>> > > >>> >>> > Please check the attached log files....i didn't find any > >>> >>> > thing > >>> >>> > special.... > >>> >>> > > >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj > >>> >>> > <kmayilsa@redhat.com> > >>> >>> > wrote: > >>> >>> > > >>> >>> > > Do you see any errors in > >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or > >>> >>> > > vdsm.log > >>> >>> > > when > >>> >>> > > the > >>> >>> > > service is trying to start automatically after the reboot? > >>> >>> > > > >>> >>> > > Thanks, > >>> >>> > > Kanagaraj > >>> >>> > > > >>> >>> > > > >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: > >>> >>> > > > >>> >>> > > Hi Kanagaraj, > >>> >>> > > > >>> >>> > > Yes...once i will start the gluster service and then > >>> >>> > > vdsmd > >>> >>> > > ...the > >>> >>> > > host > >>> >>> > > can connect to cluster...but the question is why it's not > >>> >>> > > started > >>> >>> > > even it > >>> >>> > > has chkconfig enabled... > >>> >>> > > > >>> >>> > > I have tested it in two host cluster > >>> >>> > > environment...(Centos 6.6 > >>> >>> > > and > >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed to > >>> >>> > > reconnect > >>> >>> > > in > >>> >>> > > to > >>> >>> > > cluster after reboot.... > >>> >>> > > > >>> >>> > > In both the environment glusterd enabled for next > >>> >>> > > boot....but > >>> >>> > > it's > >>> >>> > > failed with the same error....seems it's bug in either > >>> >>> > > gluster or > >>> >>> > > Ovirt ?? > >>> >>> > > > >>> >>> > > Please help me to find the workaround here if can not > >>> >>> > > resolve > >>> >>> > > it...as > >>> >>> > > without this the Host machine can not connect after > >>> >>> > > reboot....that > >>> >>> > > means > >>> >>> > > engine will consider it as down and every time need to > >>> >>> > > manually > >>> >>> > > start > >>> >>> > > the > >>> >>> > > gluster service and vdsmd... ?? > >>> >>> > > > >>> >>> > > Thanks, > >>> >>> > > Punit > >>> >>> > > > >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj > >>> >>> > > <kmayilsa@redhat.com> > >>> >>> > > wrote: > >>> >>> > > > >>> >>> > >> From vdsm.log "error: Connection failed. Please check if > >>> >>> > >> gluster > >>> >>> > >> daemon > >>> >>> > >> is operational." > >>> >>> > >> > >>> >>> > >> Starting glusterd service should fix this issue. 'service > >>> >>> > >> glusterd > >>> >>> > >> start' > >>> >>> > >> But i am wondering why the glusterd was not started > >>> >>> > >> automatically > >>> >>> > >> after > >>> >>> > >> the reboot. 
> >>> >>> > >> > >>> >>> > >> Thanks, > >>> >>> > >> Kanagaraj > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: > >>> >>> > >> > >>> >>> > >> Hi Kanagaraj, > >>> >>> > >> > >>> >>> > >> Please find the attached VDSM logs :- > >>> >>> > >> > >>> >>> > >> ---------------- > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > >>> >>> > >> Owner.cancelAll requests {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) > >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 > >>> >>> > >> aborting > >>> >>> > >> False > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) > >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from > >>> >>> > >> state > >>> >>> > >> init > >>> >>> > >> -> > >>> >>> > >> state preparing > >>> >>> > >> Thread-13::INFO::2014-11-24 > >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and > >>> >>> > >> protect: > >>> >>> > >> repoStats(options=None) > >>> >>> > >> Thread-13::INFO::2014-11-24 > >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and > >>> >>> > >> protect: > >>> >>> > >> repoStats, Return response: {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) > >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) > >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from > >>> >>> > >> state > >>> >>> > >> preparing > >>> >>> > >> -> > >>> >>> > >> state finished > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > >>> >>> > >> Owner.releaseAll requests {} resources {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > >>> >>> > >> Owner.cancelAll requests {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) > >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 > >>> >>> > >> aborting > >>> >>> > >> False > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client > >>> >>> > >> [10.10.10.2]::call > >>> >>> > >> getCapabilities with () {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,553::utils::738::root::(execCmd) > >>> >>> > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) > >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm > >>> >>> > >> package > >>> >>> > >> ('gluster-swift',) not found > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm > >>> >>> > >> package > >>> 
>>> > >> ('gluster-swift-object',) not found > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm > >>> >>> > >> package > >>> >>> > >> ('gluster-swift-plugin',) not found > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm > >>> >>> > >> package > >>> >>> > >> ('gluster-swift-account',) not found > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm > >>> >>> > >> package > >>> >>> > >> ('gluster-swift-proxy',) not found > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm > >>> >>> > >> package > >>> >>> > >> ('gluster-swift-doc',) not found > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm > >>> >>> > >> package > >>> >>> > >> ('gluster-swift-container',) not found > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm > >>> >>> > >> package > >>> >>> > >> ('glusterfs-geo-replication',) not found > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,600::caps::646::root::(get) > >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 > >>> >>> > >> required > >>> >>> > >> >= > >>> >>> > >> 0.10.2-31 > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return > >>> >>> > >> getCapabilities > >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, 'info': > >>> >>> > >> {'HBAInventory': > >>> >>> > >> {'iSCSI': [{'InitiatorName': > >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], > >>> >>> > >> 'FC': > >>> >>> > >> []}, 'packages2': {'kernel': {'release': > >>> >>> > >> '431.el6.x86_64', > >>> >>> > >> 'buildtime': > >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': > >>> >>> > >> {'release': > >>> >>> > >> '1.el6', > >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, > >>> >>> > >> 'glusterfs-fuse': > >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': > >>> >>> > >> '3.5.1'}, > >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': > >>> >>> > >> 1402324637L, > >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': > >>> >>> > >> '1.gitdb83943.el6', > >>> >>> > >> 'buildtime': > >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': > >>> >>> > >> {'release': > >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': > >>> >>> > >> '0.12.1.2'}, > >>> >>> > >> 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': > >>> >>> > >> 1402435700L, > >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': {'release': > >>> >>> > >> '29.el6_5.9', > >>> >>> > >> 'buildtime': > >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': > >>> >>> > >> {'release': > >>> >>> > >> '1.el6', > >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': > >>> >>> > >> {'release': > >>> >>> > >> '2.el6', > >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, > >>> >>> > >> 'glusterfs-server': > >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': > >>> >>> > >> '3.5.1'}}, > >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, > >>> >>> > >> 'cpuModel': > >>> >>> > >> 'Intel(R) > >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': > >>> >>> > >> 'false', > >>> >>> > >> 'hooks': > >>> >>> > >> {}, > >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 
'selinux': > >>> >>> > >> {'mode': '1'}, > >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], > >>> >>> > >> 'networks': > >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': > >>> >>> > >> '43.252.176.16', > >>> >>> > >> 'bridged': > >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], > >>> >>> > >> 'mtu': > >>> >>> > >> '1500', > >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', > >>> >>> > >> 'ipv4addrs': > >>> >>> > >> [' > >>> >>> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], > >>> >>> > >> 'interface': > >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': > >>> >>> > >> '43.25.17.1'}, > >>> >>> > >> 'Internal': > >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': > >>> >>> > >> 'no', > >>> >>> > >> 'HOTPLUG': > >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', > >>> >>> > >> 'BOOTPROTO': > >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': > >>> >>> > >> 'Bridge', > >>> >>> > >> 'ONBOOT': > >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs': > >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], > >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', > >>> >>> > >> 'stp': > >>> >>> > >> 'off', > >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', > >>> >>> > >> 'ports': > >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': > >>> >>> > >> '10.10.10.6', > >>> >>> > >> 'bridged': False, 'ipv6addrs': > >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], > >>> >>> > >> 'mtu': > >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', > >>> >>> > >> 'ipv4addrs': [' > >>> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], 'interface': > >>> >>> > >> u'bond1', > >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': > >>> >>> > >> {'iface': > >>> >>> > >> 'VMNetwork', > >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', > >>> >>> > >> 'MTU': > >>> >>> > >> '1500', > >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', > >>> >>> > >> 'STP': > >>> >>> > >> 'off', > >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, > >>> >>> > >> 'bridged': > >>> >>> > >> True, > >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': > >>> >>> > >> '', > >>> >>> > >> 'bootproto4': > >>> >>> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], > >>> >>> > >> 'mtu': > >>> >>> > >> '1500', > >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': > >>> >>> > >> {'Internal': > >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', > >>> >>> > >> 'MTU': > >>> >>> > >> '9000', > >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', > >>> >>> > >> 'STP': > >>> >>> > >> 'off', > >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, > >>> >>> > >> 'ipv6addrs': > >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', > >>> >>> > >> 'netmask': '', > >>> >>> > >> 'stp': > >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': > >>> >>> > >> '', > >>> >>> > >> 'opts': > >>> >>> > >> {'topology_change_detected': '0', > >>> >>> > >> 'multicast_last_member_count': > >>> >>> > >> '2', > >>> >>> > >> 'hash_elasticity': '4', > >>> >>> > >> 'multicast_query_response_interval': > >>> >>> > >> '999', > >>> >>> > >> 'multicast_snooping': '1', > >>> >>> > >> 'multicast_startup_query_interval': > >>> >>> > >> '3124', > >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': > >>> >>> > >> '25496', > >>> >>> > 
>> 'max_age': > >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': > >>> >>> > >> '8000.001018cddaac', 'priority': '32768', > >>> >>> > >> 'multicast_membership_interval': > >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', > >>> >>> > >> 'multicast_querier': > >>> >>> > >> '0', > >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': > >>> >>> > >> '199', > >>> >>> > >> 'topology_change': '0', 'bridge_id': '8000.001018cddaac', > >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', > >>> >>> > >> 'gc_timer': > >>> >>> > >> '31', > >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > >>> >>> > >> 'multicast_query_interval': '12498', > >>> >>> > >> 'multicast_last_member_interval': > >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, > >>> >>> > >> 'ports': > >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': > >>> >>> > >> {'DEFROUTE': > >>> >>> > >> 'no', > >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', > >>> >>> > >> 'NM_CONTROLLED': > >>> >>> > >> 'no', > >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', > >>> >>> > >> 'TYPE': > >>> >>> > >> 'Bridge', > >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': > >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], > >>> >>> > >> 'mtu': > >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], > >>> >>> > >> 'ipv6gateway': > >>> >>> > >> '::', > >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': '0', > >>> >>> > >> 'multicast_last_member_count': '2', 'hash_elasticity': > >>> >>> > >> '4', > >>> >>> > >> 'multicast_query_response_interval': '999', > >>> >>> > >> 'multicast_snooping': > >>> >>> > >> '1', > >>> >>> > >> 'multicast_startup_query_interval': '3124', > >>> >>> > >> 'hello_timer': > >>> >>> > >> '131', > >>> >>> > >> 'multicast_querier_interval': '25496', 'max_age': '1999', > >>> >>> > >> 'hash_max': > >>> >>> > >> '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', > >>> >>> > >> 'priority': > >>> >>> > >> '32768', 'multicast_membership_interval': '25996', > >>> >>> > >> 'root_path_cost': > >>> >>> > >> '0', > >>> >>> > >> 'root_port': '0', 'multicast_querier': '0', > >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': > >>> >>> > >> '199', > >>> >>> > >> 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', > >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', > >>> >>> > >> 'gc_timer': > >>> >>> > >> '31', > >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', > >>> >>> > >> 'multicast_query_interval': '12498', > >>> >>> > >> 'multicast_last_member_interval': > >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, > >>> >>> > >> 'ports': > >>> >>> > >> ['bond0.36']}}, 'uuid': > >>> >>> > >> '44454C4C-4C00-1057-8053-B7C04F504E31', > >>> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': > >>> >>> > >> {'permhwaddr': > >>> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', > >>> >>> > >> 'NM_CONTROLLED': > >>> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', > >>> >>> > >> 'MASTER': > >>> >>> > >> 'bond1', > >>> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], > >>> >>> > >> 'mtu': > >>> >>> > >> '9000', > >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': > >>> >>> > >> '00:10:18:cd:da:ac', > >>> >>> > >> 'speed': > >>> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', > >>> >>> > >> 'addr': '', > >>> >>> > >> 'cfg': > >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', > 
>>> >>> > >> 'HWADDR': > >>> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', > >>> >>> > >> 'ONBOOT': > >>> >>> > >> 'no'}, > >>> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', > >>> >>> > >> 'ipv4addrs': [], > >>> >>> > >> 'hwaddr': > >>> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': > >>> >>> > >> {'permhwaddr': > >>> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', > >>> >>> > >> 'NM_CONTROLLED': > >>> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', > >>> >>> > >> 'MASTER': > >>> >>> > >> 'bond0', > >>> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], > >>> >>> > >> 'mtu': > >>> >>> > >> '1500', > >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': > >>> >>> > >> '60:eb:69:20:b4:6c', > >>> >>> > >> 'speed': > >>> >>> > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', > >>> >>> > >> 'addr': '', > >>> >>> > >> 'cfg': > >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', > >>> >>> > >> 'HWADDR': > >>> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', > >>> >>> > >> 'ONBOOT': > >>> >>> > >> 'yes'}, > >>> >>> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', > >>> >>> > >> 'ipv4addrs': [], > >>> >>> > >> 'hwaddr': > >>> >>> > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, > >>> >>> > >> 'software_revision': '1', > >>> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', > >>> >>> > >> '3.5'], > >>> >>> > >> 'cpuFlags': > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', > >>> >>> > >> 'ISCSIInitiatorName': > >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8', > >>> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', > >>> >>> > >> '3.1', > >>> >>> > >> '3.2', > >>> >>> > >> '3.3', > >>> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': > >>> >>> > >> '321', > >>> >>> > >> 'bondings': > >>> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', > >>> >>> > >> 'netmask': '', > >>> >>> > >> 'slaves': > >>> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', > >>> >>> > >> 'cfg': > >>> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', > >>> >>> > >> 'BONDING_OPTS': > >>> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, > >>> >>> > >> 'ipv6addrs': > >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', > >>> >>> > >> 'netmask': '', > >>> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': > >>> >>> > >> ['eth0', > >>> >>> > >> 'eth1'], > >>> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': > >>> >>> > >> {'addr': > >>> >>> > >> '10.10.10.6', > >>> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', > >>> >>> > >> 'HOTPLUG': > >>> >>> > >> 'no', > >>> >>> > >> 'MTU': > >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': > >>> >>> > >> '255.255.255.0', > >>> >>> > >> 'BOOTPROTO': > >>> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': > >>> >>> > >> 'bond1', > >>> >>> > >> 'ONBOOT': > >>> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], > >>> >>> > >> 'mtu': > >>> 
>>> > >> '9000', > >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24' > >>> >>> > >> <http://10.10.10.6/24%27>], 'hwaddr': > >>> >>> > >> '00:10:18:cd:da:ac', > >>> >>> > >> 'slaves': > >>> >>> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': > >>> >>> > >> '4'}}, > >>> >>> > >> 'bond2': > >>> >>> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', > >>> >>> > >> 'slaves': > >>> >>> > >> [], > >>> >>> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', > >>> >>> > >> 'cfg': {}, > >>> >>> > >> 'mtu': > >>> >>> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': > >>> >>> > >> '00:00:00:00:00:00'}}, > >>> >>> > >> 'software_version': '4.16', 'memSize': '24019', > >>> >>> > >> 'cpuSpeed': > >>> >>> > >> '2667.000', > >>> >>> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, > >>> >>> > >> 7, 8, > >>> >>> > >> 9, > >>> >>> > >> 10, 11, > >>> >>> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', > >>> >>> > >> 'cpus': > >>> >>> > >> [0, > >>> >>> > >> 1, 2, > >>> >>> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow > >>> >>> > >> Man', > >>> >>> > >> 'vlans': > >>> >>> > >> {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', > >>> >>> > >> 'cfg': > >>> >>> > >> {'DEFROUTE': > >>> >>> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': > >>> >>> > >> 'no', > >>> >>> > >> 'GATEWAY': > >>> >>> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': > >>> >>> > >> '255.255.255.0', > >>> >>> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', > >>> >>> > >> 'ONBOOT': > >>> >>> > >> 'yes'}, > >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': > >>> >>> > >> 10, > >>> >>> > >> 'mtu': > >>> >>> > >> '1500', > >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': > >>> >>> > >> ['43.25.17.16/24'] > >>> >>> > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': > >>> >>> > >> 'bond0', > >>> >>> > >> 'addr': > >>> >>> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', > >>> >>> > >> 'HOTPLUG': > >>> >>> > >> 'no', > >>> >>> > >> 'MTU': > >>> >>> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', > >>> >>> > >> 'ONBOOT': > >>> >>> > >> 'no'}, > >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': > >>> >>> > >> 36, > >>> >>> > >> 'mtu': > >>> >>> > >> '1500', > >>> >>> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': > >>> >>> > >> 'bond1', > >>> >>> > >> 'addr': > >>> >>> > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', > >>> >>> > >> 'HOTPLUG': > >>> >>> > >> 'no', > >>> >>> > >> 'MTU': > >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', > >>> >>> > >> 'ONBOOT': > >>> >>> > >> 'no'}, > >>> >>> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': > >>> >>> > >> 100, > >>> >>> > >> 'mtu': > >>> >>> > >> '9000', > >>> >>> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', > >>> >>> > >> 'kvmEnabled': > >>> >>> > >> 'true', > >>> >>> > >> 'guestOverhead': '65', 'cpuThreads': '24', > >>> >>> > >> 'emulatedMachines': > >>> >>> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', > >>> >>> > >> u'rhel6.2.0', > >>> >>> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', > >>> >>> > >> u'rhel5.4.0'], > >>> >>> > >> 'operatingSystem': {'release': '5.el6.centos.11.1', > >>> >>> > >> 'version': > >>> >>> > >> '6', > >>> >>> > >> 'name': > >>> >>> > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client > >>> 
>>> > >> [10.10.10.2]::call > >>> >>> > >> getHardwareInfo with () {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return > >>> >>> > >> getHardwareInfo > >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, 'info': > >>> >>> > >> {'systemProductName': 'CS24-TY', 'systemSerialNumber': > >>> >>> > >> '7LWSPN1', > >>> >>> > >> 'systemFamily': 'Server', 'systemVersion': 'A00', > >>> >>> > >> 'systemUUID': > >>> >>> > >> '44454c4c-4c00-1057-8053-b7c04f504e31', > >>> >>> > >> 'systemManufacturer': > >>> >>> > >> 'Dell'}} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client > >>> >>> > >> [10.10.10.2]::call > >>> >>> > >> hostsList with () {} flowID [222e8036] > >>> >>> > >> Thread-13::ERROR::2014-11-24 > >>> >>> > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm > >>> >>> > >> exception > >>> >>> > >> occured > >>> >>> > >> Traceback (most recent call last): > >>> >>> > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, > >>> >>> > >> in > >>> >>> > >> wrapper > >>> >>> > >> res = f(*args, **kwargs) > >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 54, in > >>> >>> > >> wrapper > >>> >>> > >> rv = func(*args, **kwargs) > >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 251, in > >>> >>> > >> hostsList > >>> >>> > >> return {'hosts': self.svdsmProxy.glusterPeerStatus()} > >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 50, in > >>> >>> > >> __call__ > >>> >>> > >> return callMethod() > >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 48, in > >>> >>> > >> <lambda> > >>> >>> > >> **kwargs) > >>> >>> > >> File "<string>", line 2, in glusterPeerStatus > >>> >>> > >> File > >>> >>> > >> "/usr/lib64/python2.6/multiprocessing/managers.py", line > >>> >>> > >> 740, > >>> >>> > >> in > >>> >>> > >> _callmethod > >>> >>> > >> raise convert_to_error(kind, result) > >>> >>> > >> GlusterCmdExecFailedException: Command execution failed > >>> >>> > >> error: Connection failed. Please check if gluster daemon > >>> >>> > >> is > >>> >>> > >> operational. 
> >>> >>> > >> return code: 1 > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) > >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from > >>> >>> > >> state > >>> >>> > >> init > >>> >>> > >> -> > >>> >>> > >> state preparing > >>> >>> > >> Thread-13::INFO::2014-11-24 > >>> >>> > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and > >>> >>> > >> protect: > >>> >>> > >> repoStats(options=None) > >>> >>> > >> Thread-13::INFO::2014-11-24 > >>> >>> > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and > >>> >>> > >> protect: > >>> >>> > >> repoStats, Return response: {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) > >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) > >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from > >>> >>> > >> state > >>> >>> > >> preparing > >>> >>> > >> -> > >>> >>> > >> state finished > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) > >>> >>> > >> Owner.releaseAll requests {} resources {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> > >>> >>> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) > >>> >>> > >> Owner.cancelAll requests {} > >>> >>> > >> Thread-13::DEBUG::2014-11-24 > >>> >>> > >> > >>> >>> > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) > >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 > >>> >>> > >> aborting > >>> >>> > >> False > >>> >>> > >> ------------------------------- > >>> >>> > >> > >>> >>> > >> [root@compute4 ~]# service glusterd status > >>> >>> > >> glusterd is stopped > >>> >>> > >> [root@compute4 ~]# chkconfig --list | grep glusterd > >>> >>> > >> glusterd 0:off 1:off 2:on 3:on 4:on > >>> >>> > >> 5:on > >>> >>> > >> 6:off > >>> >>> > >> [root@compute4 ~]# > >>> >>> > >> > >>> >>> > >> Thanks, > >>> >>> > >> Punit > >>> >>> > >> > >>> >>> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj > >>> >>> > >> <kmayilsa@redhat.com> > >>> >>> > >> wrote: > >>> >>> > >> > >>> >>> > >>> Can you send the corresponding error in vdsm.log from > >>> >>> > >>> the > >>> >>> > >>> host? > >>> >>> > >>> > >>> >>> > >>> Also check if glusterd service is running. > >>> >>> > >>> > >>> >>> > >>> Thanks, > >>> >>> > >>> Kanagaraj > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote: > >>> >>> > >>> > >>> >>> > >>> Hi, > >>> >>> > >>> > >>> >>> > >>> After reboot my Hypervisior host can not activate again > >>> >>> > >>> in the > >>> >>> > >>> cluster > >>> >>> > >>> and failed with the following error :- > >>> >>> > >>> > >>> >>> > >>> Gluster command [<UNKNOWN>] failed on server... 
> >>> >>> > >>> > >>> >>> > >>> Engine logs :- > >>> >>> > >>> > >>> >>> > >>> 2014-11-24 18:05:28,397 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-64) START, > >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId > >>> >>> > >>> = > >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 5f251c90 > >>> >>> > >>> 2014-11-24 18:05:30,609 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-64) FINISH, > >>> >>> > >>> GlusterVolumesListVDSCommand, > >>> >>> > >>> return: > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0}, > >>> >>> > >>> log id: 5f251c90 > >>> >>> > >>> 2014-11-24 18:05:33,768 INFO > >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> >>> > >>> (ajp--127.0.0.1-8702-8) > >>> >>> > >>> [287d570d] Lock Acquired to object EngineLock > >>> >>> > >>> [exclusiveLocks= > >>> >>> > >>> key: > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS > >>> >>> > >>> , sharedLocks= ] > >>> >>> > >>> 2014-11-24 18:05:33,795 INFO > >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Running > >>> >>> > >>> command: > >>> >>> > >>> ActivateVdsCommand internal: false. Entities affected : > >>> >>> > >>> ID: > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction > >>> >>> > >>> group > >>> >>> > >>> MANIPULATE_HOST > >>> >>> > >>> with role type ADMIN > >>> >>> > >>> 2014-11-24 18:05:33,796 INFO > >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Before > >>> >>> > >>> acquiring > >>> >>> > >>> lock in > >>> >>> > >>> order to prevent monitoring for host Compute5 from > >>> >>> > >>> data-center > >>> >>> > >>> SV_WTC > >>> >>> > >>> 2014-11-24 18:05:33,797 INFO > >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock > >>> >>> > >>> acquired, > >>> >>> > >>> from > >>> >>> > >>> now a > >>> >>> > >>> monitoring of host will be skipped for host Compute5 > >>> >>> > >>> from > >>> >>> > >>> data-center > >>> >>> > >>> SV_WTC > >>> >>> > >>> 2014-11-24 18:05:33,817 INFO > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] START, > >>> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, status=Unassigned, > >>> >>> > >>> nonOperationalReason=NONE, stopSpmFailureLogged=false), > >>> >>> > >>> log id: > >>> >>> > >>> 1cbc7311 > >>> >>> > >>> 2014-11-24 18:05:33,820 INFO > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] FINISH, > >>> >>> > >>> SetVdsStatusVDSCommand, log id: 1cbc7311 > >>> >>> > >>> 2014-11-24 18:05:34,086 INFO > >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Activate finished. > >>> >>> > >>> Lock > >>> >>> > >>> released. 
> >>> >>> > >>> Monitoring can run now for host Compute5 from > >>> >>> > >>> data-center > >>> >>> > >>> SV_WTC > >>> >>> > >>> 2014-11-24 18:05:34,088 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Correlation ID: > >>> >>> > >>> 287d570d, > >>> >>> > >>> Job > >>> >>> > >>> ID: > >>> >>> > >>> 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: null, > >>> >>> > >>> Custom > >>> >>> > >>> Event ID: > >>> >>> > >>> -1, Message: Host Compute5 was activated by admin. > >>> >>> > >>> 2014-11-24 18:05:34,090 INFO > >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] > >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Lock freed to object > >>> >>> > >>> EngineLock > >>> >>> > >>> [exclusiveLocks= key: > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a > >>> >>> > >>> value: > >>> >>> > >>> VDS > >>> >>> > >>> , sharedLocks= ] > >>> >>> > >>> 2014-11-24 18:05:35,792 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] START, > >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId > >>> >>> > >>> = > >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 48a0c832 > >>> >>> > >>> 2014-11-24 18:05:37,064 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) START, > >>> >>> > >>> GetHardwareInfoVDSCommand(HostName = Compute5, HostId = > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, > >>> >>> > >>> > >>> >>> > >>> vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log > >>> >>> > >>> id: > >>> >>> > >>> 6d560cc2 > >>> >>> > >>> 2014-11-24 18:05:37,074 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) FINISH, > >>> >>> > >>> GetHardwareInfoVDSCommand, log > >>> >>> > >>> id: 6d560cc2 > >>> >>> > >>> 2014-11-24 18:05:37,093 WARN > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsManager] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) Host Compute5 is > >>> >>> > >>> running > >>> >>> > >>> with > >>> >>> > >>> disabled > >>> >>> > >>> SELinux. > >>> >>> > >>> 2014-11-24 18:05:37,127 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] Running > >>> >>> > >>> command: > >>> >>> > >>> HandleVdsCpuFlagsOrClusterChangedCommand internal: true. 
> >>> >>> > >>> Entities > >>> >>> > >>> affected > >>> >>> > >>> : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > >>> >>> > >>> 2014-11-24 18:05:37,147 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, > >>> >>> > >>> GlusterServersListVDSCommand(HostName = Compute5, HostId > >>> >>> > >>> = > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: 4faed87 > >>> >>> > >>> 2014-11-24 18:05:37,164 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] FINISH, > >>> >>> > >>> GlusterServersListVDSCommand, log id: 4faed87 > >>> >>> > >>> 2014-11-24 18:05:37,189 INFO > >>> >>> > >>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] Running > >>> >>> > >>> command: > >>> >>> > >>> SetNonOperationalVdsCommand internal: true. Entities > >>> >>> > >>> affected : > >>> >>> > >>> ID: > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > >>> >>> > >>> 2014-11-24 18:05:37,206 INFO > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, > >>> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, > >>> >>> > >>> status=NonOperational, > >>> >>> > >>> nonOperationalReason=GLUSTER_COMMAND_FAILED, > >>> >>> > >>> stopSpmFailureLogged=false), > >>> >>> > >>> log id: fed5617 > >>> >>> > >>> 2014-11-24 18:05:37,209 INFO > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] FINISH, > >>> >>> > >>> SetVdsStatusVDSCommand, log id: fed5617 > >>> >>> > >>> 2014-11-24 18:05:37,223 ERROR > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] > >>> >>> > >>> Correlation ID: > >>> >>> > >>> 4a84c4e5, > >>> >>> > >>> Job > >>> >>> > >>> ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call Stack: > >>> >>> > >>> null, > >>> >>> > >>> Custom > >>> >>> > >>> Event > >>> >>> > >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed on > >>> >>> > >>> server > >>> >>> > >>> Compute5. > >>> >>> > >>> 2014-11-24 18:05:37,243 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] > >>> >>> > >>> Correlation ID: > >>> >>> > >>> null, > >>> >>> > >>> Call > >>> >>> > >>> Stack: null, Custom Event ID: -1, Message: Status of > >>> >>> > >>> host > >>> >>> > >>> Compute5 > >>> >>> > >>> was > >>> >>> > >>> set > >>> >>> > >>> to NonOperational. > >>> >>> > >>> 2014-11-24 18:05:37,272 INFO > >>> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsVersionCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running > >>> >>> > >>> command: > >>> >>> > >>> HandleVdsVersionCommand internal: true. 
Entities > >>> >>> > >>> affected : > >>> >>> > >>> ID: > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS > >>> >>> > >>> 2014-11-24 18:05:37,274 INFO > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host > >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is > >>> >>> > >>> already in > >>> >>> > >>> NonOperational status for reason GLUSTER_COMMAND_FAILED. > >>> >>> > >>> SetNonOperationalVds command is skipped. > >>> >>> > >>> 2014-11-24 18:05:38,065 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] FINISH, > >>> >>> > >>> GlusterVolumesListVDSCommand, return: > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1}, > >>> >>> > >>> log id: 48a0c832 > >>> >>> > >>> 2014-11-24 18:05:43,243 INFO > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] > >>> >>> > >>> (DefaultQuartzScheduler_Worker-35) START, > >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, HostId > >>> >>> > >>> = > >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: 3ce13ebc > >>> >>> > >>> ^C > >>> >>> > >>> [root@ccr01 ~]# > >>> >>> > >>> > >>> >>> > >>> Thanks, > >>> >>> > >>> Punit > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> _______________________________________________ > >>> >>> > >>> Users mailing > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> listUsers@ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >>> > >>> >>> > >> > >>> >>> > >> > >>> >>> > > > >>> >>> > > > >>> >>> > > >>> >> > >>> >> > >>> > > >>> > > >>> > _______________________________________________ > >>> > Gluster-users mailing list > >>> > Gluster-users@gluster.org > >>> > http://supercolony.gluster.org/mailman/listinfo/gluster-users > >> > >> > >

Hi Kaushal, It seems it's a bug in glusterfs 3.6... even after I changed my systemd unit so that the network service starts before glusterd, it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
Thanks,
Punit

On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user on the mailing list a while back who had a similar issue of GlusterD failing to start. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration and the problem vanished when he began using dhcp. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing for you to look at.
I'll try to find the mail thread to see if it was the same problem as yours.
~kaushal
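
If it helps to rule that in or out here, a quick check of which service actually brings the interfaces up on the rebooted host could look something like this (just a sketch; the unit and file names are the stock CentOS 7 ones, not taken from this thread):
---------------
# Which network backend is enabled to run at boot?
systemctl is-enabled NetworkManager network

# Are the interfaces configured statically or via DHCP?
grep -i BOOTPROTO /etc/sysconfig/network-scripts/ifcfg-*
---------------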
I don't know much about how the network target is brought up in CentOS 7, but I'll try to help as much as I can.
It seems to me that, after the network has been brought up and by the time GlusterD is started, a. the machine hasn't yet received its hostname, or b. it hasn't yet registered with the name server.
This is causing name resolution failures.
I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a.
So it seems to be b. But that kind of registration with a name server happens only in DDNS systems, which doesn't seem to be the case for you.
Both of these reasons might be wrong (most likely wrong). You'd do well to ask for help from someone with more experience in systemd + networking.
~kaushal
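
One rough way to test possibility (b) on the rebooted host, assuming the peer name that fails to resolve further down the thread (cpu05.zne01.hkg1.ovt.36stack.com), is to compare when glusterd started against when the network and name resolution actually became usable:
---------------
# Does the peer name resolve at all right after boot?
getent hosts cpu05.zne01.hkg1.ovt.36stack.com

# Timeline of glusterd vs. the network services in the current boot:
journalctl -b -u glusterd -u network -u NetworkManager
---------------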
On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
This is the host which I rebooted... would you mind letting me know how I can make the glusterd service come up after the network, if the network is the issue? I am using CentOS 7.
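
For what it's worth, the usual systemd way to do this is to order the service after network-online.target instead of network.target. A minimal sketch for CentOS 7 follows, assuming the stock unit lives in /usr/lib/systemd/system/glusterd.service (as shown further down the thread) and that NetworkManager manages the interfaces; with the legacy network service the wait-online step differs:
---------------
# Copy the vendor unit so the change survives package updates.
cp /usr/lib/systemd/system/glusterd.service /etc/systemd/system/glusterd.service

# In /etc/systemd/system/glusterd.service change the ordering to:
#   [Unit]
#   Wants=network-online.target
#   After=network-online.target rpcbind.service
# and remove the stock "Before=network-online.target" line, which would
# otherwise create an ordering cycle.

# Make network-online.target actually wait for addresses/DNS:
systemctl enable NetworkManager-wait-online.service

systemctl daemon-reload
systemctl enable glusterd
systemctl restart glusterd
---------------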
On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote:
This peer cannot be identified.
" [2014-12-03 02:29:25.998153] D [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname]
0-management:
Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
I don't know why this address is not being resolved during boot time. If this is a valid peer, then the only reason I can think of is that the network is not up.
If you had previously detached the peer forcefully, then that could have left stale entries in some volumes. In this case as well, GlusterD will fail to identify the peer.
Do either of these reasons seem a possibility to you?
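
A quick way to check for the stale-entry possibility is to compare the brick hostnames glusterd has stored on disk against the peers it actually knows about; a rough sketch using the default /var/lib/glusterd layout (the grep target is the peer name from the log above):
---------------
# Peers glusterd currently knows about:
gluster peer status
cat /var/lib/glusterd/peers/*

# Brick addresses recorded for each volume:
gluster volume info | grep -i brick

# Does the failing name appear anywhere in glusterd's on-disk state?
grep -r cpu05.zne01.hkg1.ovt.36stack.com /var/lib/glusterd/
---------------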
On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyoe5 and
On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> wrote:
Hey Punit, In the logs you've provided, GlusterD appears to be running correctly.
Could you provide the logs for the time period when GlusterD attempts to start but fails?
~kaushal
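
On CentOS 7 those startup-failure logs can be pulled from the journal and from glusterd's own log file; a short sketch (paths are the defaults, and the -LDEBUG option is the one suggested further down in this thread):
---------------
# glusterd messages from the current boot only:
journalctl -b -u glusterd

# glusterd's own log:
less /var/log/glusterfs/etc-glusterfs-glusterd.vol.log

# For more detail, append "-LDEBUG" to the ExecStart line of
# glusterd.service, then reload and restart once:
systemctl daemon-reload
systemctl restart glusterd
---------------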
On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote: > > Hi Kaushal, > > Please find the logs here :- http://ur1.ca/iyhs5 and > http://ur1.ca/iyhue > > Thanks, > punit > > > On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> wrote: >> >> Hey Punit, >> Could you start Glusterd in debug mode and provide the logs here? >> To start it in debug mode, append '-LDEBUG' to the ExecStart line in >> the service file. >> >> ~kaushal >> >> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal <hypunit@gmail.com> >> wrote: >> > Hi, >> > >> > Can Any body help me on this ?? >> > >> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal < hypunit@gmail.com> >> > wrote: >> >> >> >> Hi Kaushal, >> >> >> >> Thanks for the detailed reply....let me explain my setup first :- >> >> >> >> 1. Ovirt Engine >> >> 2. 4* host as well as storage machine (Host and gluster combined) >> >> 3. Every host has 24 bricks... >> >> >> >> Now whenever the host machine reboot...it can come up but can not >> >> join the >> >> cluster again and through the following error "Gluster command >> >> [<UNKNOWN>] >> >> failed on server.." >> >> >> >> Please check my comment in line :- >> >> >> >> 1. Use the same string for doing the peer probe and for the brick >> >> address >> >> during volume create/add-brick. Ideally, we suggest you use >> >> properly >> >> resolvable FQDNs everywhere. If that is not possible, then use only >> >> IP >> >> addresses. Try to avoid short names. >> >> --------------- >> >> [root@cpu05 ~]# gluster peer status >> >> Number of Peers: 3 >> >> >> >> Hostname: cpu03.stack.com >> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >> >> State: Peer in Cluster (Connected) >> >> >> >> Hostname: cpu04.stack.com >> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >> >> State: Peer in Cluster (Connected) >> >> Other names: >> >> 10.10.0.8 >> >> >> >> Hostname: cpu02.stack.com >> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >> >> State: Peer in Cluster (Connected) >> >> [root@cpu05 ~]# >> >> ---------------- >> >> 2. During boot up, make sure to launch glusterd only after the >> >> network is >> >> up. This will allow the new peer identification mechanism to do its >> >> job correctly. >> >> >> I think the service itself doing the same job.... 
>> >> >> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >> >> [Unit] >> >> Description=GlusterFS, a clustered file-system server >> >> After=network.target rpcbind.service >> >> Before=network-online.target >> >> >> >> [Service] >> >> Type=forking >> >> PIDFile=/var/run/glusterd.pid >> >> LimitNOFILE=65536 >> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >> >> KillMode=process >> >> >> >> [Install] >> >> WantedBy=multi-user.target >> >> [root@cpu05 ~]# >> >> -------------------- >> >> >> >> gluster logs :- >> >> >> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >> >> [glusterfsd.c:2018:main] >> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version >> >> 3.6.1 >> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: >> >> Maximum allowed open file descriptors set to 65536 >> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: >> >> Using >> >> /var/lib/glusterd as working directory >> >> [2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] >> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >> >> such device) >> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >> >> 0-rdma.management: >> >> Failed to initialize IB Device >> >> [2014-11-24 09:22:22.155285] E >> >> [rpc-transport.c:333:rpc_transport_load] >> >> 0-rpc-transport: 'rdma' initialization failed >> >> [2014-11-24 09:22:22.155354] W >> >> [rpcsvc.c:1524:rpcsvc_transport_create] >> >> 0-rpc-service: cannot create listener, initing the transport failed >> >> [2014-11-24 09:22:22.156290] I >> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >> >> geo-replication >> >> module not installed in the system >> >> [2014-11-24 09:22:22.161318] I >> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >> >> retrieved >> >> op-version: 30600 >> >> [2014-11-24 09:22:22.821800] I >> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >> >> 0-management: >> >> connect returned 0 >> >> [2014-11-24 09:22:22.825810] I >> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >> >> 0-management: >> >> connect returned 0 >> >> [2014-11-24 09:22:22.828705] I >> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >> >> 0-management: >> >> connect returned 0 >> >> [2014-11-24 09:22:22.828771] I >> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >> >> 0-management: setting frame-timeout to 600 >> >> [2014-11-24 09:22:22.832670] I >> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >> >> 0-management: setting frame-timeout to 600 >> >> [2014-11-24 09:22:22.835919] I >> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >> >> 0-management: setting frame-timeout to 600 >> >> [2014-11-24 09:22:22.840209] E >> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >> >> resolve >> >> brick failed in restore >> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >> >> 0-management: >> >> Initialization of volume 'management' failed, review your volfile >> >> again >> >> [2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] >> >> 0-management: initializing translator failed >> >> [2014-11-24 09:22:22.840264] E >> >> [graph.c:525:glusterfs_graph_activate] >> >> 0-graph: init failed >> >> [2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] >> >> (--> >> >> 0-: received signum (0), shutting down >> >> >> >> Thanks, >> >> Punit >> >> >> >> >> >> >> >> >> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M 
<kshlmster@gmail.com
>> >> wrote: >> >>> >> >>> Based on the logs I can guess that glusterd is being started >> >>> before >> >>> the network has come up and that the addresses given to bricks do >> >>> not >> >>> directly match the addresses used in during peer probe. >> >>> >> >>> The gluster_after_reboot log has the line "[2014-11-25 >> >>> 06:46:09.972113] E >> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >> >>> 0-glusterd: resolve brick failed in restore". >> >>> >> >>> Brick resolution fails when glusterd cannot match the address for >> >>> the >> >>> brick, with one of the peers. Brick resolution happens in two >> >>> phases, >> >>> 1. We first try to identify the peer by performing string >> >>> comparisions >> >>> with the brick address and the peer addresses (The peer names will >> >>> be >> >>> the names/addresses that were given when the peer was probed). >> >>> 2. If we don't find a match from step 1, we will then resolve all >> >>> the >> >>> brick address and the peer addresses into addrinfo structs, and >> >>> then >> >>> compare these structs to find a match. This process should >> >>> generally >> >>> find a match if available. This will fail only if the network is >> >>> not >> >>> up yet as we cannot resolve addresses. >> >>> >> >>> The above steps are applicable only to glusterfs versions =3.6. >> >>> They >> >>> were introduced to reduce problems with peer identification,
>> >>> the >> >>> one you encountered >> >>> >> >>> Since both of the steps failed to find a match in one run, but >> >>> succeeded later, we can come to the conclusion that, >> >>> a) the bricks don't have the exact same string used in peer
>> >>> for >> >>> their addresses as step 1 failed, and >> >>> b) the network was not up in the initial run, as step 2 failed >> >>> during >> >>> the initial run, but passed in the second run. >> >>> >> >>> Please let me know if my conclusion is correct. >> >>> >> >>> If it is, you can solve your problem in two ways. >> >>> 1. Use the same string for doing the peer probe and for the brick >> >>> address during volume create/add-brick. Ideally, we suggest you >> >>> use >> >>> properly resolvable FQDNs everywhere. If that is not possible, >> >>> then >> >>> use only IP addresses. Try to avoid short names. >> >>> 2. During boot up, make sure to launch glusterd only after the >> >>> network >> >>> is up. This will allow the new peer identification mechanism to do >> >>> its >> >>> job correctly. >> >>> >> >>> >> >>> If you have already followed these steps and yet still hit the >> >>> problem, then please provide more information (setup, logs, etc.). >> >>> It >> >>> could be much different problem that you are facing. >> >>> >> >>> ~kaushal >> >>> >> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal >> >>> <hypunit@gmail.com> >> >>> wrote: >> >>> > Is there any one can help on this ?? >> >>> > >> >>> > Thanks, >> >>> > punit >> >>> > >> >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal >> >>> > <hypunit@gmail.com> >> >>> > wrote: >> >>> >> >> >>> >> Hi, >> >>> >> >> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >> >>> >> >> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >> >>> >> <kmayilsa@redhat.com> >> >>> >> wrote: >> >>> >>> >> >>> >>> [+Gluster-users@gluster.org] >> >>> >>> >> >>> >>> "Initialization of volume 'management' failed, review your >> >>> >>> volfile >> >>> >>> again", glusterd throws this error when the service is started >> >>> >>> automatically >> >>> >>> after the reboot. But the service is successfully started >> >>> >>> later >> >>> >>> manually by >> >>> >>> the user. >> >>> >>> >> >>> >>> can somebody from gluster-users please help on this? >> >>> >>> >> >>> >>> glusterfs version: 3.5.1 >> >>> >>> >> >>> >>> Thanks, >> >>> >>> Kanagaraj >> >>> >>> >> >>> >>> ----- Original Message ----- >> >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >> >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >> >>> >>> > Cc: users@ovirt.org >> >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >> >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] >> >>> >>> > failed on >> >>> >>> > server... >> >>> >>> > >> >>> >>> > Hi Kanagraj, >> >>> >>> > >> >>> >>> > Please check the attached log files....i didn't find any >> >>> >>> > thing >> >>> >>> > special.... >> >>> >>> > >> >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj >> >>> >>> > <kmayilsa@redhat.com> >> >>> >>> > wrote: >> >>> >>> > >> >>> >>> > > Do you see any errors in >> >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or >> >>> >>> > > vdsm.log >> >>> >>> > > when >> >>> >>> > > the >> >>> >>> > > service is trying to start automatically after the reboot? >> >>> >>> > > >> >>> >>> > > Thanks, >> >>> >>> > > Kanagaraj >> >>> >>> > > >> >>> >>> > > >> >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >> >>> >>> > > >> >>> >>> > > Hi Kanagaraj, >> >>> >>> > > >> >>> >>> > > Yes...once i will start the gluster service and then >> >>> >>> > > vdsmd >> >>> >>> > > ...the >> >>> >>> > > host >> >>> >>> > > can connect to cluster...but the question is why it's not >> >>> >>> > > started >> >>> >>> > > even it >> >>> >>> > > has chkconfig enabled... 
>> >>> >>> > > >> >>> >>> > > I have tested it in two host cluster >> >>> >>> > > environment...(Centos 6.6 >> >>> >>> > > and >> >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed to >> >>> >>> > > reconnect >> >>> >>> > > in >> >>> >>> > > to >> >>> >>> > > cluster after reboot.... >> >>> >>> > > >> >>> >>> > > In both the environment glusterd enabled for next >> >>> >>> > > boot....but >> >>> >>> > > it's >> >>> >>> > > failed with the same error....seems it's bug in either >> >>> >>> > > gluster or >> >>> >>> > > Ovirt ?? >> >>> >>> > > >> >>> >>> > > Please help me to find the workaround here if can not >> >>> >>> > > resolve >> >>> >>> > > it...as >> >>> >>> > > without this the Host machine can not connect after >> >>> >>> > > reboot....that >> >>> >>> > > means >> >>> >>> > > engine will consider it as down and every time need to >> >>> >>> > > manually >> >>> >>> > > start >> >>> >>> > > the >> >>> >>> > > gluster service and vdsmd... ?? >> >>> >>> > > >> >>> >>> > > Thanks, >> >>> >>> > > Punit >> >>> >>> > > >> >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj >> >>> >>> > > <kmayilsa@redhat.com> >> >>> >>> > > wrote: >> >>> >>> > > >> >>> >>> > >> From vdsm.log "error: Connection failed. Please check if >> >>> >>> > >> gluster >> >>> >>> > >> daemon >> >>> >>> > >> is operational." >> >>> >>> > >> >> >>> >>> > >> Starting glusterd service should fix this issue. 'service >> >>> >>> > >> glusterd >> >>> >>> > >> start' >> >>> >>> > >> But i am wondering why the glusterd was not started >> >>> >>> > >> automatically >> >>> >>> > >> after >> >>> >>> > >> the reboot. >> >>> >>> > >> >> >>> >>> > >> Thanks, >> >>> >>> > >> Kanagaraj >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >> >>> >>> > >> >> >>> >>> > >> Hi Kanagaraj, >> >>> >>> > >> >> >>> >>> > >> Please find the attached VDSM logs :- >> >>> >>> > >> >> >>> >>> > >> ---------------- >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> >>> >>> > >> Owner.cancelAll requests {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >> >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 >> >>> >>> > >> aborting >> >>> >>> > >> False >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from >> >>> >>> > >> state >> >>> >>> > >> init >> >>> >>> > >> -> >> >>> >>> > >> state preparing >> >>> >>> > >> Thread-13::INFO::2014-11-24 >> >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and >> >>> >>> > >> protect: >> >>> >>> > >> repoStats(options=None) >> >>> >>> > >> Thread-13::INFO::2014-11-24 >> >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and >> >>> >>> > >> protect: >> >>> >>> > >> repoStats, Return response: {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >> >>> >>> > >> 
Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from >> >>> >>> > >> state >> >>> >>> > >> preparing >> >>> >>> > >> -> >> >>> >>> > >> state finished >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >> >>> >>> > >> Owner.releaseAll requests {} resources {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> >>> >>> > >> Owner.cancelAll requests {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 >> >>> >>> > >> aborting >> >>> >>> > >> False >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client >> >>> >>> > >> [10.10.10.2]::call >> >>> >>> > >> getCapabilities with () {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,553::utils::738::root::(execCmd) >> >>> >>> > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) >> >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm >> >>> >>> > >> package >> >>> >>> > >> ('gluster-swift',) not found >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm >> >>> >>> > >> package >> >>> >>> > >> ('gluster-swift-object',) not found >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm >> >>> >>> > >> package >> >>> >>> > >> ('gluster-swift-plugin',) not found >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >> >>> >>> > >> package >> >>> >>> > >> ('gluster-swift-account',) not found >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >> >>> >>> > >> package >> >>> >>> > >> ('gluster-swift-proxy',) not found >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >> >>> >>> > >> package >> >>> >>> > >> ('gluster-swift-doc',) not found >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >> >>> >>> > >> package >> >>> >>> > >> ('gluster-swift-container',) not found >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >> >>> >>> > >> package >> >>> >>> > >> ('glusterfs-geo-replication',) not found >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,600::caps::646::root::(get) >> >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 >> >>> >>> > >> required >> >>> >>> > >> >= >> >>> >>> > >> 0.10.2-31 >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return >> >>> >>> > >> getCapabilities >> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, 'info': >> >>> >>> > >> {'HBAInventory': >> >>> >>> > >> {'iSCSI': [{'InitiatorName': >> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >> >>> >>> > >> 'FC': >> >>> >>> > >> 
[]}, 'packages2': {'kernel': {'release': >> >>> >>> > >> '431.el6.x86_64', >> >>> >>> > >> 'buildtime': >> >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >> >>> >>> > >> {'release': >> >>> >>> > >> '1.el6', >> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, >> >>> >>> > >> 'glusterfs-fuse': >> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': >> >>> >>> > >> '3.5.1'}, >> >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': >> >>> >>> > >> 1402324637L, >> >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': >> >>> >>> > >> '1.gitdb83943.el6', >> >>> >>> > >> 'buildtime': >> >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': >> >>> >>> > >> {'release': >> >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': >> >>> >>> > >> '0.12.1.2'}, >> >>> >>> > >> 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': >> >>> >>> > >> 1402435700L, >> >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': {'release': >> >>> >>> > >> '29.el6_5.9', >> >>> >>> > >> 'buildtime': >> >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': >> >>> >>> > >> {'release': >> >>> >>> > >> '1.el6', >> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >> >>> >>> > >> {'release': >> >>> >>> > >> '2.el6', >> >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >> >>> >>> > >> 'glusterfs-server': >> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, 'version': >> >>> >>> > >> '3.5.1'}}, >> >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, >> >>> >>> > >> 'cpuModel': >> >>> >>> > >> 'Intel(R) >> >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': >> >>> >>> > >> 'false', >> >>> >>> > >> 'hooks': >> >>> >>> > >> {}, >> >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': >> >>> >>> > >> {'mode': '1'}, >> >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], >> >>> >>> > >> 'networks': >> >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': >> >>> >>> > >> '43.252.176.16', >> >>> >>> > >> 'bridged': >> >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >> >>> >>> > >> 'mtu': >> >>> >>> > >> '1500', >> >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', >> >>> >>> > >> 'ipv4addrs': >> >>> >>> > >> [' >> >>> >>> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], >> >>> >>> > >> 'interface': >> >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': >> >>> >>> > >> '43.25.17.1'}, >> >>> >>> > >> 'Internal': >> >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': >> >>> >>> > >> 'no', >> >>> >>> > >> 'HOTPLUG': >> >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', >> >>> >>> > >> 'BOOTPROTO': >> >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': >> >>> >>> > >> 'Bridge', >> >>> >>> > >> 'ONBOOT': >> >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs': >> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >> >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', >> >>> >>> > >> 'stp': >> >>> >>> > >> 'off', >> >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', >> >>> >>> > >> 'ports': >> >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': >> >>> >>> > >> '10.10.10.6', >> >>> >>> > >> 'bridged': False, 'ipv6addrs': >> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >> >>> >>> > >> 'mtu': >> >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', >> >>> >>> > >> 'ipv4addrs': [' >> >>> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], 
'interface': >> >>> >>> > >> u'bond1', >> >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': >> >>> >>> > >> {'iface': >> >>> >>> > >> 'VMNetwork', >> >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', >> >>> >>> > >> 'MTU': >> >>> >>> > >> '1500', >> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', >> >>> >>> > >> 'STP': >> >>> >>> > >> 'off', >> >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, >> >>> >>> > >> 'bridged': >> >>> >>> > >> True, >> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': >> >>> >>> > >> '', >> >>> >>> > >> 'bootproto4': >> >>> >>> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >> >>> >>> > >> 'mtu': >> >>> >>> > >> '1500', >> >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': >> >>> >>> > >> {'Internal': >> >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', >> >>> >>> > >> 'MTU': >> >>> >>> > >> '9000', >> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', >> >>> >>> > >> 'STP': >> >>> >>> > >> 'off', >> >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, >> >>> >>> > >> 'ipv6addrs': >> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', >> >>> >>> > >> 'netmask': '', >> >>> >>> > >> 'stp': >> >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': >> >>> >>> > >> '', >> >>> >>> > >> 'opts': >> >>> >>> > >> {'topology_change_detected': '0', >> >>> >>> > >> 'multicast_last_member_count': >> >>> >>> > >> '2', >> >>> >>> > >> 'hash_elasticity': '4', >> >>> >>> > >> 'multicast_query_response_interval': >> >>> >>> > >> '999', >> >>> >>> > >> 'multicast_snooping': '1', >> >>> >>> > >> 'multicast_startup_query_interval': >> >>> >>> > >> '3124', >> >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': >> >>> >>> > >> '25496', >> >>> >>> > >> 'max_age': >> >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': >> >>> >>> > >> '8000.001018cddaac', 'priority': '32768', >> >>> >>> > >> 'multicast_membership_interval': >> >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >> >>> >>> > >> 'multicast_querier': >> >>> >>> > >> '0', >> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >> >>> >>> > >> '199', >> >>> >>> > >> 'topology_change': '0', 'bridge_id': '8000.001018cddaac', >> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >> >>> >>> > >> 'gc_timer': >> >>> >>> > >> '31', >> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >> >>> >>> > >> 'multicast_query_interval': '12498', >> >>> >>> > >> 'multicast_last_member_interval': >> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >> >>> >>> > >> 'ports': >> >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': >> >>> >>> > >> {'DEFROUTE': >> >>> >>> > >> 'no', >> >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', >> >>> >>> > >> 'NM_CONTROLLED': >> >>> >>> > >> 'no', >> >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', >> >>> >>> > >> 'TYPE': >> >>> >>> > >> 'Bridge', >> >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': >> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], >> >>> >>> > >> 'mtu': >> >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >> >>> >>> > >> 'ipv6gateway': >> >>> >>> > >> '::', >> >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': '0', >> >>> >>> > >> 'multicast_last_member_count': '2', 'hash_elasticity': >> >>> >>> > >> '4', >> >>> >>> > >> 
'multicast_query_response_interval': '999', >> >>> >>> > >> 'multicast_snooping': >> >>> >>> > >> '1', >> >>> >>> > >> 'multicast_startup_query_interval': '3124', >> >>> >>> > >> 'hello_timer': >> >>> >>> > >> '131', >> >>> >>> > >> 'multicast_querier_interval': '25496', 'max_age': '1999', >> >>> >>> > >> 'hash_max': >> >>> >>> > >> '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', >> >>> >>> > >> 'priority': >> >>> >>> > >> '32768', 'multicast_membership_interval': '25996', >> >>> >>> > >> 'root_path_cost': >> >>> >>> > >> '0', >> >>> >>> > >> 'root_port': '0', 'multicast_querier': '0', >> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >> >>> >>> > >> '199', >> >>> >>> > >> 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', >> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >> >>> >>> > >> 'gc_timer': >> >>> >>> > >> '31', >> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >> >>> >>> > >> 'multicast_query_interval': '12498', >> >>> >>> > >> 'multicast_last_member_interval': >> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >> >>> >>> > >> 'ports': >> >>> >>> > >> ['bond0.36']}}, 'uuid': >> >>> >>> > >> '44454C4C-4C00-1057-8053-B7C04F504E31', >> >>> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': >> >>> >>> > >> {'permhwaddr': >> >>> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', >> >>> >>> > >> 'NM_CONTROLLED': >> >>> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', >> >>> >>> > >> 'MASTER': >> >>> >>> > >> 'bond1', >> >>> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], >> >>> >>> > >> 'mtu': >> >>> >>> > >> '9000', >> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >> >>> >>> > >> '00:10:18:cd:da:ac', >> >>> >>> > >> 'speed': >> >>> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', >> >>> >>> > >> 'addr': '', >> >>> >>> > >> 'cfg': >> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', >> >>> >>> > >> 'HWADDR': >> >>> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', >> >>> >>> > >> 'ONBOOT': >> >>> >>> > >> 'no'}, >> >>> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', >> >>> >>> > >> 'ipv4addrs': [], >> >>> >>> > >> 'hwaddr': >> >>> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': >> >>> >>> > >> {'permhwaddr': >> >>> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes', >> >>> >>> > >> 'NM_CONTROLLED': >> >>> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', >> >>> >>> > >> 'MASTER': >> >>> >>> > >> 'bond0', >> >>> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], >> >>> >>> > >> 'mtu': >> >>> >>> > >> '1500', >> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >> >>> >>> > >> '60:eb:69:20:b4:6c', >> >>> >>> > >> 'speed': >> >>> >>> > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', >> >>> >>> > >> 'addr': '', >> >>> >>> > >> 'cfg': >> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', >> >>> >>> > >> 'HWADDR': >> >>> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', >> >>> >>> > >> 'ONBOOT': >> >>> >>> > >> 'yes'}, >> >>> >>> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', >> >>> >>> > >> 'ipv4addrs': [], >> >>> >>> > >> 'hwaddr': >> >>> >>> > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, >> >>> >>> > >> 'software_revision': '1', >> >>> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', >> >>> >>> > >> '3.5'], >> >>> >>> > >> 'cpuFlags': >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >> >>> >>> > >> 'ISCSIInitiatorName': >> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8', >> >>> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', >> >>> >>> > >> '3.1', >> >>> >>> > >> '3.2', >> >>> >>> > >> '3.3', >> >>> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': >> >>> >>> > >> '321', >> >>> >>> > >> 'bondings': >> >>> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', >> >>> >>> > >> 'netmask': '', >> >>> >>> > >> 'slaves': >> >>> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', >> >>> >>> > >> 'cfg': >> >>> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', >> >>> >>> > >> 'BONDING_OPTS': >> >>> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, >> >>> >>> > >> 'ipv6addrs': >> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', >> >>> >>> > >> 'netmask': '', >> >>> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': >> >>> >>> > >> ['eth0', >> >>> >>> > >> 'eth1'], >> >>> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': >> >>> >>> > >> {'addr': >> >>> >>> > >> '10.10.10.6', >> >>> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', >> >>> >>> > >> 'HOTPLUG': >> >>> >>> > >> 'no', >> >>> >>> > >> 'MTU': >> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': >> >>> >>> > >> '255.255.255.0', >> >>> >>> > >> 'BOOTPROTO': >> >>> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': >> >>> >>> > >> 'bond1', >> >>> >>> > >> 'ONBOOT': >> >>> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >> >>> >>> > >> 'mtu': >> >>> >>> > >> '9000', >> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': [' 10.10.10.6/24' >> >>> >>> > >> <http://10.10.10.6/24%27>], 'hwaddr': >> >>> >>> > >> '00:10:18:cd:da:ac', >> >>> >>> > >> 'slaves': >> >>> >>> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': >> >>> >>> > >> '4'}}, >> >>> >>> > >> 'bond2': >> >>> >>> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', >> >>> >>> > >> 'slaves': >> >>> >>> > >> [], >> >>> >>> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', >> >>> >>> > >> 'cfg': {}, >> >>> >>> > >> 'mtu': >> >>> >>> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': >> >>> >>> > >> '00:00:00:00:00:00'}}, >> >>> >>> > >> 'software_version': '4.16', 'memSize': '24019', >> >>> >>> > >> 'cpuSpeed': >> >>> >>> > >> '2667.000', >> >>> >>> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': [6, >> >>> >>> > >> 7, 8, >> >>> >>> > >> 9, >> >>> >>> > >> 10, 11, >> >>> >>> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': '12278', >> >>> >>> > >> 'cpus': >> >>> >>> > >> [0, >> >>> >>> > >> 1, 2, >> >>> >>> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': 'Snow >> >>> >>> > >> Man', >> >>> >>> > >> 'vlans': >> >>> >>> > >> {'bond0.10': {'iface': 'bond0', 'addr': '43.25.17.16', >> >>> >>> > >> 'cfg': >> >>> >>> > >> {'DEFROUTE': >> >>> >>> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', 'HOTPLUG': >> >>> >>> > >> 'no', >> >>> >>> > >> 'GATEWAY': >> >>> >>> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 
'NETMASK': >> >>> >>> > >> '255.255.255.0', >> >>> >>> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': '1500', >> >>> >>> > >> 'ONBOOT': >> >>> >>> > >> 'yes'}, >> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': >> >>> >>> > >> 10, >> >>> >>> > >> 'mtu': >> >>> >>> > >> '1500', >> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >> >>> >>> > >> ['43.25.17.16/24'] >> >>> >>> > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': >> >>> >>> > >> 'bond0', >> >>> >>> > >> 'addr': >> >>> >>> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', >> >>> >>> > >> 'HOTPLUG': >> >>> >>> > >> 'no', >> >>> >>> > >> 'MTU': >> >>> >>> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', >> >>> >>> > >> 'ONBOOT': >> >>> >>> > >> 'no'}, >> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'vlanid': >> >>> >>> > >> 36, >> >>> >>> > >> 'mtu': >> >>> >>> > >> '1500', >> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': {'iface': >> >>> >>> > >> 'bond1', >> >>> >>> > >> 'addr': >> >>> >>> > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', >> >>> >>> > >> 'HOTPLUG': >> >>> >>> > >> 'no', >> >>> >>> > >> 'MTU': >> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', >> >>> >>> > >> 'ONBOOT': >> >>> >>> > >> 'no'}, >> >>> >>> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'vlanid': >> >>> >>> > >> 100, >> >>> >>> > >> 'mtu': >> >>> >>> > >> '9000', >> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', >> >>> >>> > >> 'kvmEnabled': >> >>> >>> > >> 'true', >> >>> >>> > >> 'guestOverhead': '65', 'cpuThreads': '24', >> >>> >>> > >> 'emulatedMachines': >> >>> >>> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', >> >>> >>> > >> u'rhel6.2.0', >> >>> >>> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', u'rhel5.4.4', >> >>> >>> > >> u'rhel5.4.0'], >> >>> >>> > >> 'operatingSystem': {'release': '5.el6.centos.11.1', >> >>> >>> > >> 'version': >> >>> >>> > >> '6', >> >>> >>> > >> 'name': >> >>> >>> > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client >> >>> >>> > >> [10.10.10.2]::call >> >>> >>> > >> getHardwareInfo with () {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return >> >>> >>> > >> getHardwareInfo >> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, 'info': >> >>> >>> > >> {'systemProductName': 'CS24-TY', 'systemSerialNumber': >> >>> >>> > >> '7LWSPN1', >> >>> >>> > >> 'systemFamily': 'Server', 'systemVersion': 'A00', >> >>> >>> > >> 'systemUUID': >> >>> >>> > >> '44454c4c-4c00-1057-8053-b7c04f504e31', >> >>> >>> > >> 'systemManufacturer': >> >>> >>> > >> 'Dell'}} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client >> >>> >>> > >> [10.10.10.2]::call >> >>> >>> > >> hostsList with () {} flowID [222e8036] >> >>> >>> > >> Thread-13::ERROR::2014-11-24 >> >>> >>> > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm >> >>> >>> > >> exception >> >>> >>> > >> occured >> >>> >>> > >> Traceback (most recent call last): >> >>> >>> > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, >> >>> >>> > >> in >> >>> >>> > >> wrapper >> >>> >>> > >> res = f(*args, **kwargs) >> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 54, in >> >>> >>> > >> wrapper >> >>> >>> > >> rv = func(*args, **kwargs) >> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 251, 
in >> >>> >>> > >> hostsList >> >>> >>> > >> return {'hosts': self.svdsmProxy.glusterPeerStatus()} >> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 50, in >> >>> >>> > >> __call__ >> >>> >>> > >> return callMethod() >> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 48, in >> >>> >>> > >> <lambda> >> >>> >>> > >> **kwargs) >> >>> >>> > >> File "<string>", line 2, in glusterPeerStatus >> >>> >>> > >> File >> >>> >>> > >> "/usr/lib64/python2.6/multiprocessing/managers.py",
>> >>> >>> > >> 740, >> >>> >>> > >> in >> >>> >>> > >> _callmethod >> >>> >>> > >> raise convert_to_error(kind, result) >> >>> >>> > >> GlusterCmdExecFailedException: Command execution failed >> >>> >>> > >> error: Connection failed. Please check if gluster daemon >> >>> >>> > >> is >> >>> >>> > >> operational. >> >>> >>> > >> return code: 1 >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) >> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from >> >>> >>> > >> state >> >>> >>> > >> init >> >>> >>> > >> -> >> >>> >>> > >> state preparing >> >>> >>> > >> Thread-13::INFO::2014-11-24 >> >>> >>> > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and >> >>> >>> > >> protect: >> >>> >>> > >> repoStats(options=None) >> >>> >>> > >> Thread-13::INFO::2014-11-24 >> >>> >>> > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and >> >>> >>> > >> protect: >> >>> >>> > >> repoStats, Return response: {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) >> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) >> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from >> >>> >>> > >> state >> >>> >>> > >> preparing >> >>> >>> > >> -> >> >>> >>> > >> state finished >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >> >>> >>> > >> Owner.releaseAll requests {} resources {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> >> >>> >>> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >> >>> >>> > >> Owner.cancelAll requests {} >> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >> >>> >>> > >> >> >>> >>> > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) >> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 >> >>> >>> > >> aborting >> >>> >>> > >> False >> >>> >>> > >> ------------------------------- >> >>> >>> > >> >> >>> >>> > >> [root@compute4 ~]# service glusterd status >> >>> >>> > >> glusterd is stopped >> >>> >>> > >> [root@compute4 ~]# chkconfig --list | grep glusterd >> >>> >>> > >> glusterd 0:off 1:off 2:on 3:on 4:on >> >>> >>> > >> 5:on >> >>> >>> > >> 6:off >> >>> >>> > >> [root@compute4 ~]# >> >>> >>> > >> >> >>> >>> > >> Thanks, >> >>> >>> > >> Punit >> >>> >>> > >> >> >>> >>> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj >> >>> >>> > >> <kmayilsa@redhat.com> >> >>> >>> > >> wrote: >> >>> >>> > >> >> >>> >>> > >>> Can you send the corresponding error in vdsm.log from >> >>> >>> > >>> the >> >>> >>> > >>> host? >> >>> >>> > >>> >> >>> >>> > >>> Also check if glusterd service is running. 
Thanks,
Kanagaraj

Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before GlusterD is started.

If even that fails, you could try adding an entry for the system's hostname to /etc/hosts. That should prevent any more failures.

I still don't believe it's a problem with Gluster. Gluster uses the APIs provided by the system to perform name resolution, and these clearly work correctly, because you can start GlusterD later. Since the resolution failure only happens during boot, it points to a system or network setup issue during boot. To me it looks like the network isn't completely set up at that point in time.

~kaushal

On Fri, Dec 5, 2014 at 12:47 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
It seems it's a bug in glusterfs 3.6... even though I arranged for systemd to start the network service before glusterd, it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
Thanks, Punit
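For what it's worth, Wants=network-online.target on its own only pulls the target in; to actually delay glusterd, the unit also needs After=network-online.target. Below is a minimal sketch of the unit with both changes applied (based on the glusterd.service quoted in this thread; treat it as illustrative rather than the packaged unit). With this ordering the ExecStartPre network hack above should no longer be needed.

---------------
[Unit]
Description=GlusterFS, a clustered file-system server
# Order glusterd after the network is reported up, and pull that target in
After=network.target network-online.target rpcbind.service
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
---------------

After editing the unit, run 'systemctl daemon-reload' so systemd picks the change up before the next reboot. Note that network-online.target only waits for connectivity if a wait-online service (NetworkManager-wait-online.service or similar) is enabled.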
On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user on the mailing list a while back with a similar issue of GlusterD failing to start. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration and the problem vanished when he began using DHCP. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing you could look at.
I'll try to find the mail thread to see if it was the same problem as yours.
~kaushal
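A quick, read-only way to check which network bring-up path the host is actually using (a sketch; the unit names assume a stock CentOS 7 install, and either service may legitimately be absent):

---------------
# Is the legacy network script or NetworkManager enabled?
systemctl is-enabled network.service NetworkManager.service 2>/dev/null

# If NetworkManager is in use, this unit is what makes
# network-online.target actually wait for connectivity:
systemctl status NetworkManager-wait-online.service
---------------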
On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com> wrote:
I don't know much about how the network target is brought up in CentOS 7, but I'll try to help as much as I can.
It seems to me that, after the network has been brought up and by the time GlusterD is started,
a. the machine hasn't yet received its hostname, or
b. it hasn't yet registered with the name server.
This is causing name resolution failures.
I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a.
So it seems to be b. But that kind of registration happens only in DDNS systems, which doesn't seem to be the case for you.
Both of these reasons might be wrong (most likely are wrong). You'd do well to ask for help from someone with more experience in systemd + networking.
~kaushal
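Hypotheses (a) and (b) above can be checked directly on the affected host; a sketch (the peer name used here is the one from the glusterd log quoted further down, so substitute your own):

---------------
# (a) What hostname did the machine come up with?
hostnamectl

# (b) Can the peer's name be resolved right now?
getent hosts cpu05.zne01.hkg1.ovt.36stack.com

# Compare timestamps: was glusterd started before the network was up
# on the last boot?
journalctl -b -u glusterd.service -u NetworkManager-wait-online.service --no-pager
---------------

If getent succeeds when run by hand but glusterd still fails at boot, that again points at ordering rather than at the resolver configuration itself.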
On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
This is the host which I rebooted... would you mind letting me know how I can make the glusterd service come up after the network? I am using CentOS 7... if the network is the issue...
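Before changing anything, it is worth confirming what glusterd is currently ordered against on this host; a couple of read-only checks (a sketch, assuming the unit is named glusterd.service):

---------------
# The ordering and dependency properties systemd has loaded for glusterd
systemctl show glusterd.service -p Before -p After -p Wants

# The chain of units glusterd waited for on the last boot
systemd-analyze critical-chain glusterd.service
---------------

If network-online.target does not appear in the After= list, glusterd is free to start before the network is usable, which matches the failure described in this thread.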
On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote:
This peer cannot be identified.
" [2014-12-03 02:29:25.998153] D [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
I don't know why this address is not being resolved at boot time. If this is a valid peer, then the only reason I can think of is that the network is not up.
If you had previously detached the peer forcefully, then that could have left stale entries in some volumes. In that case as well, GlusterD will fail to identify the peer.
Do either of these reasons seem a possibility to you?
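For the second possibility (stale entries left behind by a forced detach), the state glusterd restores at startup can be inspected directly; a sketch, assuming the default working directory /var/lib/glusterd and treating everything as read-only:

---------------
# One file per known peer (uuid, state and hostname entries)
ls /var/lib/glusterd/peers/
cat /var/lib/glusterd/peers/*

# Brick definitions per volume; the hostnames recorded here must match
# one of the peers above (or the local host) for brick resolution to work
grep -r hostname /var/lib/glusterd/vols/*/bricks/
---------------

Any brick file that still points at a host no longer present under peers/ would explain the "resolve brick failed in restore" error quoted later in this thread.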
On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote:
Hi Kaushal,
Please find the logs here :- http://ur1.ca/iyoe5 and http://ur1.ca/iyoed
On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> wrote: > > Hey Punit, > In the logs you've provided, GlusterD appears to be running > correctly. > Could you provide the logs for the time period when GlusterD > attempts to > start but fails. > > ~kaushal > > On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >> >> Hi Kaushal, >> >> Please find the logs here :- http://ur1.ca/iyhs5 and >> http://ur1.ca/iyhue >> >> Thanks, >> punit >> >> >> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> >> wrote: >>> >>> Hey Punit, >>> Could you start Glusterd in debug mode and provide the logs here? >>> To start it in debug mode, append '-LDEBUG' to the ExecStart line >>> in >>> the service file. >>> >>> ~kaushal >>> >>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal <hypunit@gmail.com> >>> wrote: >>> > Hi, >>> > >>> > Can Any body help me on this ?? >>> > >>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal >>> > <hypunit@gmail.com> >>> > wrote: >>> >> >>> >> Hi Kaushal, >>> >> >>> >> Thanks for the detailed reply....let me explain my setup first >>> >> :- >>> >> >>> >> 1. Ovirt Engine >>> >> 2. 4* host as well as storage machine (Host and gluster >>> >> combined) >>> >> 3. Every host has 24 bricks... >>> >> >>> >> Now whenever the host machine reboot...it can come up but can >>> >> not >>> >> join the >>> >> cluster again and through the following error "Gluster command >>> >> [<UNKNOWN>] >>> >> failed on server.." >>> >> >>> >> Please check my comment in line :- >>> >> >>> >> 1. Use the same string for doing the peer probe and for the >>> >> brick >>> >> address >>> >> during volume create/add-brick. Ideally, we suggest you use >>> >> properly >>> >> resolvable FQDNs everywhere. If that is not possible, then use >>> >> only >>> >> IP >>> >> addresses. Try to avoid short names. >>> >> --------------- >>> >> [root@cpu05 ~]# gluster peer status >>> >> Number of Peers: 3 >>> >> >>> >> Hostname: cpu03.stack.com >>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >>> >> State: Peer in Cluster (Connected) >>> >> >>> >> Hostname: cpu04.stack.com >>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >>> >> State: Peer in Cluster (Connected) >>> >> Other names: >>> >> 10.10.0.8 >>> >> >>> >> Hostname: cpu02.stack.com >>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >>> >> State: Peer in Cluster (Connected) >>> >> [root@cpu05 ~]# >>> >> ---------------- >>> >> 2. During boot up, make sure to launch glusterd only after the >>> >> network is >>> >> up. This will allow the new peer identification mechanism to do >>> >> its >>> >> job correctly. >>> >> >> I think the service itself doing the same job.... 
>>> >> >>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >>> >> [Unit] >>> >> Description=GlusterFS, a clustered file-system server >>> >> After=network.target rpcbind.service >>> >> Before=network-online.target >>> >> >>> >> [Service] >>> >> Type=forking >>> >> PIDFile=/var/run/glusterd.pid >>> >> LimitNOFILE=65536 >>> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >>> >> KillMode=process >>> >> >>> >> [Install] >>> >> WantedBy=multi-user.target >>> >> [root@cpu05 ~]# >>> >> -------------------- >>> >> >>> >> gluster logs :- >>> >> >>> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >>> >> [glusterfsd.c:2018:main] >>> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd >>> >> version >>> >> 3.6.1 >>> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >>> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] >>> >> 0-management: >>> >> Maximum allowed open file descriptors set to 65536 >>> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] >>> >> 0-management: >>> >> Using >>> >> /var/lib/glusterd as working directory >>> >> [2014-11-24 09:22:22.155216] W >>> >> [rdma.c:4195:__gf_rdma_ctx_create] >>> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >>> >> such device) >>> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >>> >> 0-rdma.management: >>> >> Failed to initialize IB Device >>> >> [2014-11-24 09:22:22.155285] E >>> >> [rpc-transport.c:333:rpc_transport_load] >>> >> 0-rpc-transport: 'rdma' initialization failed >>> >> [2014-11-24 09:22:22.155354] W >>> >> [rpcsvc.c:1524:rpcsvc_transport_create] >>> >> 0-rpc-service: cannot create listener, initing the transport >>> >> failed >>> >> [2014-11-24 09:22:22.156290] I >>> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >>> >> geo-replication >>> >> module not installed in the system >>> >> [2014-11-24 09:22:22.161318] I >>> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >>> >> retrieved >>> >> op-version: 30600 >>> >> [2014-11-24 09:22:22.821800] I >>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>> >> 0-management: >>> >> connect returned 0 >>> >> [2014-11-24 09:22:22.825810] I >>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>> >> 0-management: >>> >> connect returned 0 >>> >> [2014-11-24 09:22:22.828705] I >>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>> >> 0-management: >>> >> connect returned 0 >>> >> [2014-11-24 09:22:22.828771] I >>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>> >> 0-management: setting frame-timeout to 600 >>> >> [2014-11-24 09:22:22.832670] I >>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>> >> 0-management: setting frame-timeout to 600 >>> >> [2014-11-24 09:22:22.835919] I >>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>> >> 0-management: setting frame-timeout to 600 >>> >> [2014-11-24 09:22:22.840209] E >>> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >>> >> resolve >>> >> brick failed in restore >>> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >>> >> 0-management: >>> >> Initialization of volume 'management' failed, review your >>> >> volfile >>> >> again >>> >> [2014-11-24 09:22:22.840245] E >>> >> [graph.c:322:glusterfs_graph_init] >>> >> 0-management: initializing translator failed >>> >> [2014-11-24 09:22:22.840264] E >>> >> [graph.c:525:glusterfs_graph_activate] >>> >> 0-graph: init failed >>> >> [2014-11-24 09:22:22.840754] W >>> >> [glusterfsd.c:1194:cleanup_and_exit] >>> >> (--> >>> >> 0-: 
received signum (0), shutting down >>> >> >>> >> Thanks, >>> >> Punit >>> >> >>> >> >>> >> >>> >> >>> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M >>> >> <kshlmster@gmail.com> >>> >> wrote: >>> >>> >>> >>> Based on the logs I can guess that glusterd is being started >>> >>> before >>> >>> the network has come up and that the addresses given to bricks >>> >>> do >>> >>> not >>> >>> directly match the addresses used in during peer probe. >>> >>> >>> >>> The gluster_after_reboot log has the line "[2014-11-25 >>> >>> 06:46:09.972113] E >>> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >>> >>> 0-glusterd: resolve brick failed in restore". >>> >>> >>> >>> Brick resolution fails when glusterd cannot match the address >>> >>> for >>> >>> the >>> >>> brick, with one of the peers. Brick resolution happens in two >>> >>> phases, >>> >>> 1. We first try to identify the peer by performing string >>> >>> comparisions >>> >>> with the brick address and the peer addresses (The peer names >>> >>> will >>> >>> be >>> >>> the names/addresses that were given when the peer was probed). >>> >>> 2. If we don't find a match from step 1, we will then resolve >>> >>> all >>> >>> the >>> >>> brick address and the peer addresses into addrinfo structs, >>> >>> and >>> >>> then >>> >>> compare these structs to find a match. This process should >>> >>> generally >>> >>> find a match if available. This will fail only if the network >>> >>> is >>> >>> not >>> >>> up yet as we cannot resolve addresses. >>> >>> >>> >>> The above steps are applicable only to glusterfs versions >>> >>> >=3.6. >>> >>> They >>> >>> were introduced to reduce problems with peer identification, >>> >>> like >>> >>> the >>> >>> one you encountered >>> >>> >>> >>> Since both of the steps failed to find a match in one run, but >>> >>> succeeded later, we can come to the conclusion that, >>> >>> a) the bricks don't have the exact same string used in peer >>> >>> probe >>> >>> for >>> >>> their addresses as step 1 failed, and >>> >>> b) the network was not up in the initial run, as step 2 failed >>> >>> during >>> >>> the initial run, but passed in the second run. >>> >>> >>> >>> Please let me know if my conclusion is correct. >>> >>> >>> >>> If it is, you can solve your problem in two ways. >>> >>> 1. Use the same string for doing the peer probe and for the >>> >>> brick >>> >>> address during volume create/add-brick. Ideally, we suggest >>> >>> you >>> >>> use >>> >>> properly resolvable FQDNs everywhere. If that is not possible, >>> >>> then >>> >>> use only IP addresses. Try to avoid short names. >>> >>> 2. During boot up, make sure to launch glusterd only after the >>> >>> network >>> >>> is up. This will allow the new peer identification mechanism >>> >>> to do >>> >>> its >>> >>> job correctly. >>> >>> >>> >>> >>> >>> If you have already followed these steps and yet still hit the >>> >>> problem, then please provide more information (setup, logs, >>> >>> etc.). >>> >>> It >>> >>> could be much different problem that you are facing. >>> >>> >>> >>> ~kaushal >>> >>> >>> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal >>> >>> <hypunit@gmail.com> >>> >>> wrote: >>> >>> > Is there any one can help on this ?? 
>>> >>> > >>> >>> > Thanks, >>> >>> > punit >>> >>> > >>> >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal >>> >>> > <hypunit@gmail.com> >>> >>> > wrote: >>> >>> >> >>> >>> >> Hi, >>> >>> >> >>> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >>> >>> >> >>> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >>> >>> >> <kmayilsa@redhat.com> >>> >>> >> wrote: >>> >>> >>> >>> >>> >>> [+Gluster-users@gluster.org] >>> >>> >>> >>> >>> >>> "Initialization of volume 'management' failed, review your >>> >>> >>> volfile >>> >>> >>> again", glusterd throws this error when the service is >>> >>> >>> started >>> >>> >>> automatically >>> >>> >>> after the reboot. But the service is successfully started >>> >>> >>> later >>> >>> >>> manually by >>> >>> >>> the user. >>> >>> >>> >>> >>> >>> can somebody from gluster-users please help on this? >>> >>> >>> >>> >>> >>> glusterfs version: 3.5.1 >>> >>> >>> >>> >>> >>> Thanks, >>> >>> >>> Kanagaraj >>> >>> >>> >>> >>> >>> ----- Original Message ----- >>> >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >>> >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >>> >>> >>> > Cc: users@ovirt.org >>> >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >>> >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] >>> >>> >>> > failed on >>> >>> >>> > server... >>> >>> >>> > >>> >>> >>> > Hi Kanagraj, >>> >>> >>> > >>> >>> >>> > Please check the attached log files....i didn't find any >>> >>> >>> > thing >>> >>> >>> > special.... >>> >>> >>> > >>> >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj >>> >>> >>> > <kmayilsa@redhat.com> >>> >>> >>> > wrote: >>> >>> >>> > >>> >>> >>> > > Do you see any errors in >>> >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or >>> >>> >>> > > vdsm.log >>> >>> >>> > > when >>> >>> >>> > > the >>> >>> >>> > > service is trying to start automatically after the >>> >>> >>> > > reboot? >>> >>> >>> > > >>> >>> >>> > > Thanks, >>> >>> >>> > > Kanagaraj >>> >>> >>> > > >>> >>> >>> > > >>> >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >>> >>> >>> > > >>> >>> >>> > > Hi Kanagaraj, >>> >>> >>> > > >>> >>> >>> > > Yes...once i will start the gluster service and then >>> >>> >>> > > vdsmd >>> >>> >>> > > ...the >>> >>> >>> > > host >>> >>> >>> > > can connect to cluster...but the question is why it's >>> >>> >>> > > not >>> >>> >>> > > started >>> >>> >>> > > even it >>> >>> >>> > > has chkconfig enabled... >>> >>> >>> > > >>> >>> >>> > > I have tested it in two host cluster >>> >>> >>> > > environment...(Centos 6.6 >>> >>> >>> > > and >>> >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed >>> >>> >>> > > to >>> >>> >>> > > reconnect >>> >>> >>> > > in >>> >>> >>> > > to >>> >>> >>> > > cluster after reboot.... >>> >>> >>> > > >>> >>> >>> > > In both the environment glusterd enabled for next >>> >>> >>> > > boot....but >>> >>> >>> > > it's >>> >>> >>> > > failed with the same error....seems it's bug in either >>> >>> >>> > > gluster or >>> >>> >>> > > Ovirt ?? >>> >>> >>> > > >>> >>> >>> > > Please help me to find the workaround here if can not >>> >>> >>> > > resolve >>> >>> >>> > > it...as >>> >>> >>> > > without this the Host machine can not connect after >>> >>> >>> > > reboot....that >>> >>> >>> > > means >>> >>> >>> > > engine will consider it as down and every time need to >>> >>> >>> > > manually >>> >>> >>> > > start >>> >>> >>> > > the >>> >>> >>> > > gluster service and vdsmd... ?? 
>>> >>> >>> > > >>> >>> >>> > > Thanks, >>> >>> >>> > > Punit >>> >>> >>> > > >>> >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj >>> >>> >>> > > <kmayilsa@redhat.com> >>> >>> >>> > > wrote: >>> >>> >>> > > >>> >>> >>> > >> From vdsm.log "error: Connection failed. Please >>> >>> >>> > >> check if >>> >>> >>> > >> gluster >>> >>> >>> > >> daemon >>> >>> >>> > >> is operational." >>> >>> >>> > >> >>> >>> >>> > >> Starting glusterd service should fix this issue. >>> >>> >>> > >> 'service >>> >>> >>> > >> glusterd >>> >>> >>> > >> start' >>> >>> >>> > >> But i am wondering why the glusterd was not started >>> >>> >>> > >> automatically >>> >>> >>> > >> after >>> >>> >>> > >> the reboot. >>> >>> >>> > >> >>> >>> >>> > >> Thanks, >>> >>> >>> > >> Kanagaraj >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >>> >>> >>> > >> >>> >>> >>> > >> Hi Kanagaraj, >>> >>> >>> > >> >>> >>> >>> > >> Please find the attached VDSM logs :- >>> >>> >>> > >> >>> >>> >>> > >> ---------------- >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>> >>> >>> > >> Owner.cancelAll requests {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >>> >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 >>> >>> >>> > >> aborting >>> >>> >>> > >> False >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>> >>> >>> > >> from >>> >>> >>> > >> state >>> >>> >>> > >> init >>> >>> >>> > >> -> >>> >>> >>> > >> state preparing >>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>> >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run >>> >>> >>> > >> and >>> >>> >>> > >> protect: >>> >>> >>> > >> repoStats(options=None) >>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>> >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run >>> >>> >>> > >> and >>> >>> >>> > >> protect: >>> >>> >>> > >> repoStats, Return response: {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >>> >>> >>> > >> >>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>> >>> >>> > >> from >>> >>> >>> > >> state >>> >>> >>> > >> preparing >>> >>> >>> > >> -> >>> >>> >>> > >> state finished >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>> >>> >>> > >> Owner.cancelAll requests {} >>> >>> >>> > >> 
Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 >>> >>> >>> > >> aborting >>> >>> >>> > >> False >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) >>> >>> >>> > >> client >>> >>> >>> > >> [10.10.10.2]::call >>> >>> >>> > >> getCapabilities with () {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,553::utils::738::root::(execCmd) >>> >>> >>> > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) >>> >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm >>> >>> >>> > >> package >>> >>> >>> > >> ('gluster-swift',) not found >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm >>> >>> >>> > >> package >>> >>> >>> > >> ('gluster-swift-object',) not found >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm >>> >>> >>> > >> package >>> >>> >>> > >> ('gluster-swift-plugin',) not found >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>> >>> >>> > >> package >>> >>> >>> > >> ('gluster-swift-account',) not found >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>> >>> >>> > >> package >>> >>> >>> > >> ('gluster-swift-proxy',) not found >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>> >>> >>> > >> package >>> >>> >>> > >> ('gluster-swift-doc',) not found >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>> >>> >>> > >> package >>> >>> >>> > >> ('gluster-swift-container',) not found >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>> >>> >>> > >> package >>> >>> >>> > >> ('glusterfs-geo-replication',) not found >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,600::caps::646::root::(get) >>> >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 >>> >>> >>> > >> required >>> >>> >>> > >> >= >>> >>> >>> > >> 0.10.2-31 >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) >>> >>> >>> > >> return >>> >>> >>> > >> getCapabilities >>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>> >>> >>> > >> 'info': >>> >>> >>> > >> {'HBAInventory': >>> >>> >>> > >> {'iSCSI': [{'InitiatorName': >>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >>> >>> >>> > >> 'FC': >>> >>> >>> > >> []}, 'packages2': {'kernel': {'release': >>> >>> >>> > >> '431.el6.x86_64', >>> >>> >>> > >> 'buildtime': >>> >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >>> >>> >>> > >> {'release': >>> >>> >>> > >> '1.el6', >>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, >>> >>> >>> > >> 'glusterfs-fuse': >>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>> >>> >>> > >> 'version': >>> >>> >>> > >> '3.5.1'}, >>> >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 
'buildtime': >>> >>> >>> > >> 1402324637L, >>> >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': >>> >>> >>> > >> '1.gitdb83943.el6', >>> >>> >>> > >> 'buildtime': >>> >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': >>> >>> >>> > >> {'release': >>> >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, >>> >>> >>> > >> 'version': >>> >>> >>> > >> '0.12.1.2'}, >>> >>> >>> > >> 'qemu-img': {'release': '2.415.el6_5.10', >>> >>> >>> > >> 'buildtime': >>> >>> >>> > >> 1402435700L, >>> >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': {'release': >>> >>> >>> > >> '29.el6_5.9', >>> >>> >>> > >> 'buildtime': >>> >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': >>> >>> >>> > >> {'release': >>> >>> >>> > >> '1.el6', >>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >>> >>> >>> > >> {'release': >>> >>> >>> > >> '2.el6', >>> >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >>> >>> >>> > >> 'glusterfs-server': >>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>> >>> >>> > >> 'version': >>> >>> >>> > >> '3.5.1'}}, >>> >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, >>> >>> >>> > >> 'cpuModel': >>> >>> >>> > >> 'Intel(R) >>> >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': >>> >>> >>> > >> 'false', >>> >>> >>> > >> 'hooks': >>> >>> >>> > >> {}, >>> >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': >>> >>> >>> > >> {'mode': '1'}, >>> >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', >>> >>> >>> > >> '2.3'], >>> >>> >>> > >> 'networks': >>> >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': >>> >>> >>> > >> '43.252.176.16', >>> >>> >>> > >> 'bridged': >>> >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '1500', >>> >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', >>> >>> >>> > >> 'ipv4addrs': >>> >>> >>> > >> [' >>> >>> >>> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], >>> >>> >>> > >> 'interface': >>> >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': >>> >>> >>> > >> '43.25.17.1'}, >>> >>> >>> > >> 'Internal': >>> >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'HOTPLUG': >>> >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'BOOTPROTO': >>> >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': >>> >>> >>> > >> 'Bridge', >>> >>> >>> > >> 'ONBOOT': >>> >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs': >>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>> >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', >>> >>> >>> > >> 'stp': >>> >>> >>> > >> 'off', >>> >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', >>> >>> >>> > >> 'ports': >>> >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', >>> >>> >>> > >> 'addr': >>> >>> >>> > >> '10.10.10.6', >>> >>> >>> > >> 'bridged': False, 'ipv6addrs': >>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask': >>> >>> >>> > >> '255.255.255.0', >>> >>> >>> > >> 'ipv4addrs': [' >>> >>> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], >>> >>> >>> > >> 'interface': >>> >>> >>> > >> u'bond1', >>> >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': >>> >>> >>> > >> {'iface': >>> >>> >>> > >> 'VMNetwork', >>> >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'MTU': 
>>> >>> >>> > >> '1500', >>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>> >>> >>> > >> 'none', >>> >>> >>> > >> 'STP': >>> >>> >>> > >> 'off', >>> >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': >>> >>> >>> > >> 'no'}, >>> >>> >>> > >> 'bridged': >>> >>> >>> > >> True, >>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>> >>> >>> > >> 'gateway': >>> >>> >>> > >> '', >>> >>> >>> > >> 'bootproto4': >>> >>> >>> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '1500', >>> >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, >>> >>> >>> > >> 'bridges': >>> >>> >>> > >> {'Internal': >>> >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'MTU': >>> >>> >>> > >> '9000', >>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>> >>> >>> > >> 'none', >>> >>> >>> > >> 'STP': >>> >>> >>> > >> 'off', >>> >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': >>> >>> >>> > >> 'no'}, >>> >>> >>> > >> 'ipv6addrs': >>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', >>> >>> >>> > >> 'netmask': '', >>> >>> >>> > >> 'stp': >>> >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', >>> >>> >>> > >> 'gateway': >>> >>> >>> > >> '', >>> >>> >>> > >> 'opts': >>> >>> >>> > >> {'topology_change_detected': '0', >>> >>> >>> > >> 'multicast_last_member_count': >>> >>> >>> > >> '2', >>> >>> >>> > >> 'hash_elasticity': '4', >>> >>> >>> > >> 'multicast_query_response_interval': >>> >>> >>> > >> '999', >>> >>> >>> > >> 'multicast_snooping': '1', >>> >>> >>> > >> 'multicast_startup_query_interval': >>> >>> >>> > >> '3124', >>> >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': >>> >>> >>> > >> '25496', >>> >>> >>> > >> 'max_age': >>> >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', >>> >>> >>> > >> 'root_id': >>> >>> >>> > >> '8000.001018cddaac', 'priority': '32768', >>> >>> >>> > >> 'multicast_membership_interval': >>> >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >>> >>> >>> > >> 'multicast_querier': >>> >>> >>> > >> '0', >>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>> >>> >>> > >> '199', >>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>> >>> >>> > >> '8000.001018cddaac', >>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>> >>> >>> > >> 'gc_timer': >>> >>> >>> > >> '31', >>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>> >>> >>> > >> 'multicast_query_interval': '12498', >>> >>> >>> > >> 'multicast_last_member_interval': >>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>> >>> >>> > >> 'ports': >>> >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': >>> >>> >>> > >> {'DEFROUTE': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', >>> >>> >>> > >> 'NM_CONTROLLED': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': >>> >>> >>> > >> 'VMNetwork', >>> >>> >>> > >> 'TYPE': >>> >>> >>> > >> 'Bridge', >>> >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': >>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>> >>> >>> > >> 'ipv6gateway': >>> >>> >>> > >> '::', >>> >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': >>> >>> >>> > >> '0', >>> >>> >>> > >> 'multicast_last_member_count': '2', >>> >>> >>> > >> 'hash_elasticity': >>> >>> 
>>> > >> '4', >>> >>> >>> > >> 'multicast_query_response_interval': '999', >>> >>> >>> > >> 'multicast_snooping': >>> >>> >>> > >> '1', >>> >>> >>> > >> 'multicast_startup_query_interval': '3124', >>> >>> >>> > >> 'hello_timer': >>> >>> >>> > >> '131', >>> >>> >>> > >> 'multicast_querier_interval': '25496', 'max_age': >>> >>> >>> > >> '1999', >>> >>> >>> > >> 'hash_max': >>> >>> >>> > >> '512', 'stp_state': '0', 'root_id': >>> >>> >>> > >> '8000.60eb6920b46c', >>> >>> >>> > >> 'priority': >>> >>> >>> > >> '32768', 'multicast_membership_interval': '25996', >>> >>> >>> > >> 'root_path_cost': >>> >>> >>> > >> '0', >>> >>> >>> > >> 'root_port': '0', 'multicast_querier': '0', >>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>> >>> >>> > >> '199', >>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>> >>> >>> > >> '8000.60eb6920b46c', >>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>> >>> >>> > >> 'gc_timer': >>> >>> >>> > >> '31', >>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>> >>> >>> > >> 'multicast_query_interval': '12498', >>> >>> >>> > >> 'multicast_last_member_interval': >>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>> >>> >>> > >> 'ports': >>> >>> >>> > >> ['bond0.36']}}, 'uuid': >>> >>> >>> > >> '44454C4C-4C00-1057-8053-B7C04F504E31', >>> >>> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': >>> >>> >>> > >> {'permhwaddr': >>> >>> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': >>> >>> >>> > >> 'yes', >>> >>> >>> > >> 'NM_CONTROLLED': >>> >>> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', >>> >>> >>> > >> 'MASTER': >>> >>> >>> > >> 'bond1', >>> >>> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '9000', >>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>> >>> >>> > >> '00:10:18:cd:da:ac', >>> >>> >>> > >> 'speed': >>> >>> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', >>> >>> >>> > >> 'addr': '', >>> >>> >>> > >> 'cfg': >>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>> >>> >>> > >> '9000', >>> >>> >>> > >> 'HWADDR': >>> >>> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': >>> >>> >>> > >> 'eth2', >>> >>> >>> > >> 'ONBOOT': >>> >>> >>> > >> 'no'}, >>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', >>> >>> >>> > >> 'ipv4addrs': [], >>> >>> >>> > >> 'hwaddr': >>> >>> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': >>> >>> >>> > >> {'permhwaddr': >>> >>> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': >>> >>> >>> > >> 'yes', >>> >>> >>> > >> 'NM_CONTROLLED': >>> >>> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', >>> >>> >>> > >> 'MASTER': >>> >>> >>> > >> 'bond0', >>> >>> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '1500', >>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>> >>> >>> > >> '60:eb:69:20:b4:6c', >>> >>> >>> > >> 'speed': >>> >>> >>> > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', >>> >>> >>> > >> 'addr': '', >>> >>> >>> > >> 'cfg': >>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>> >>> >>> > >> '1500', >>> >>> >>> > >> 'HWADDR': >>> >>> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': >>> >>> >>> > >> 'eth0', >>> >>> >>> > >> 'ONBOOT': >>> >>> >>> > >> 'yes'}, >>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', >>> >>> >>> > >> 'ipv4addrs': [], >>> >>> >>> > >> 'hwaddr': >>> >>> >>> > >> 
'60:eb:69:20:b4:6c', 'speed': 1000}}, >>> >>> >>> > >> 'software_revision': '1', >>> >>> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', >>> >>> >>> > >> '3.5'], >>> >>> >>> > >> 'cpuFlags': >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >>> >>> >>> > >> 'ISCSIInitiatorName': >>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8', >>> >>> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs': >>> >>> >>> > >> ['3.0', >>> >>> >>> > >> '3.1', >>> >>> >>> > >> '3.2', >>> >>> >>> > >> '3.3', >>> >>> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': >>> >>> >>> > >> '321', >>> >>> >>> > >> 'bondings': >>> >>> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', >>> >>> >>> > >> 'netmask': '', >>> >>> >>> > >> 'slaves': >>> >>> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': >>> >>> >>> > >> '', >>> >>> >>> > >> 'cfg': >>> >>> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'BONDING_OPTS': >>> >>> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': >>> >>> >>> > >> 'yes'}, >>> >>> >>> > >> 'ipv6addrs': >>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', >>> >>> >>> > >> 'netmask': '', >>> >>> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', >>> >>> >>> > >> 'slaves': >>> >>> >>> > >> ['eth0', >>> >>> >>> > >> 'eth1'], >>> >>> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': >>> >>> >>> > >> {'addr': >>> >>> >>> > >> '10.10.10.6', >>> >>> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', >>> >>> >>> > >> 'HOTPLUG': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'MTU': >>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': >>> >>> >>> > >> '255.255.255.0', >>> >>> >>> > >> 'BOOTPROTO': >>> >>> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', >>> >>> >>> > >> 'DEVICE': >>> >>> >>> > >> 'bond1', >>> >>> >>> > >> 'ONBOOT': >>> >>> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '9000', >>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>> >>> >>> > >> ['10.10.10.6/24' >>> >>> >>> > >> <http://10.10.10.6/24%27>], 'hwaddr': >>> >>> >>> > >> '00:10:18:cd:da:ac', >>> >>> >>> > >> 'slaves': >>> >>> >>> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': >>> >>> >>> > >> '4'}}, >>> >>> >>> > >> 'bond2': >>> >>> >>> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', >>> >>> >>> > >> 'slaves': >>> >>> >>> > >> [], >>> >>> >>> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', >>> >>> >>> > >> 'cfg': {}, >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': >>> >>> >>> > >> '00:00:00:00:00:00'}}, >>> >>> >>> > >> 'software_version': '4.16', 'memSize': '24019', >>> >>> >>> > >> 'cpuSpeed': >>> >>> >>> > >> '2667.000', >>> >>> >>> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': >>> >>> >>> > >> [6, >>> >>> >>> > >> 7, 8, >>> >>> >>> > >> 9, >>> >>> >>> > >> 10, 11, >>> >>> >>> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': >>> >>> >>> > 
>> '12278', >>> >>> >>> > >> 'cpus': >>> >>> >>> > >> [0, >>> >>> >>> > >> 1, 2, >>> >>> >>> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': >>> >>> >>> > >> 'Snow >>> >>> >>> > >> Man', >>> >>> >>> > >> 'vlans': >>> >>> >>> > >> {'bond0.10': {'iface': 'bond0', 'addr': >>> >>> >>> > >> '43.25.17.16', >>> >>> >>> > >> 'cfg': >>> >>> >>> > >> {'DEFROUTE': >>> >>> >>> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', >>> >>> >>> > >> 'HOTPLUG': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'GATEWAY': >>> >>> >>> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': >>> >>> >>> > >> '255.255.255.0', >>> >>> >>> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': >>> >>> >>> > >> '1500', >>> >>> >>> > >> 'ONBOOT': >>> >>> >>> > >> 'yes'}, >>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>> >>> >>> > >> 'vlanid': >>> >>> >>> > >> 10, >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '1500', >>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>> >>> >>> > >> ['43.25.17.16/24'] >>> >>> >>> > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': >>> >>> >>> > >> 'bond0', >>> >>> >>> > >> 'addr': >>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', >>> >>> >>> > >> 'HOTPLUG': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'MTU': >>> >>> >>> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', >>> >>> >>> > >> 'ONBOOT': >>> >>> >>> > >> 'no'}, >>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>> >>> >>> > >> 'vlanid': >>> >>> >>> > >> 36, >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '1500', >>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': >>> >>> >>> > >> {'iface': >>> >>> >>> > >> 'bond1', >>> >>> >>> > >> 'addr': >>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', >>> >>> >>> > >> 'HOTPLUG': >>> >>> >>> > >> 'no', >>> >>> >>> > >> 'MTU': >>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', >>> >>> >>> > >> 'ONBOOT': >>> >>> >>> > >> 'no'}, >>> >>> >>> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>> >>> >>> > >> 'vlanid': >>> >>> >>> > >> 100, >>> >>> >>> > >> 'mtu': >>> >>> >>> > >> '9000', >>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', >>> >>> >>> > >> 'kvmEnabled': >>> >>> >>> > >> 'true', >>> >>> >>> > >> 'guestOverhead': '65', 'cpuThreads': '24', >>> >>> >>> > >> 'emulatedMachines': >>> >>> >>> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', >>> >>> >>> > >> u'rhel6.2.0', >>> >>> >>> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', >>> >>> >>> > >> u'rhel5.4.4', >>> >>> >>> > >> u'rhel5.4.0'], >>> >>> >>> > >> 'operatingSystem': {'release': '5.el6.centos.11.1', >>> >>> >>> > >> 'version': >>> >>> >>> > >> '6', >>> >>> >>> > >> 'name': >>> >>> >>> > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) >>> >>> >>> > >> client >>> >>> >>> > >> [10.10.10.2]::call >>> >>> >>> > >> getHardwareInfo with () {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) >>> >>> >>> > >> return >>> >>> >>> > >> getHardwareInfo >>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>> >>> >>> > >> 'info': >>> >>> >>> > >> {'systemProductName': 'CS24-TY', >>> >>> >>> > >> 'systemSerialNumber': >>> >>> >>> > >> '7LWSPN1', >>> >>> >>> > >> 'systemFamily': 'Server', 'systemVersion': 'A00', >>> >>> >>> > >> 'systemUUID': >>> >>> >>> > >> '44454c4c-4c00-1057-8053-b7c04f504e31', >>> >>> >>> > >> 'systemManufacturer': >>> >>> 
>>> > >> 'Dell'}} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) >>> >>> >>> > >> client >>> >>> >>> > >> [10.10.10.2]::call >>> >>> >>> > >> hostsList with () {} flowID [222e8036] >>> >>> >>> > >> Thread-13::ERROR::2014-11-24 >>> >>> >>> > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) >>> >>> >>> > >> vdsm >>> >>> >>> > >> exception >>> >>> >>> > >> occured >>> >>> >>> > >> Traceback (most recent call last): >>> >>> >>> > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line >>> >>> >>> > >> 1135, >>> >>> >>> > >> in >>> >>> >>> > >> wrapper >>> >>> >>> > >> res = f(*args, **kwargs) >>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 54, in >>> >>> >>> > >> wrapper >>> >>> >>> > >> rv = func(*args, **kwargs) >>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 251, in >>> >>> >>> > >> hostsList >>> >>> >>> > >> return {'hosts': >>> >>> >>> > >> self.svdsmProxy.glusterPeerStatus()} >>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 50, in >>> >>> >>> > >> __call__ >>> >>> >>> > >> return callMethod() >>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 48, in >>> >>> >>> > >> <lambda> >>> >>> >>> > >> **kwargs) >>> >>> >>> > >> File "<string>", line 2, in glusterPeerStatus >>> >>> >>> > >> File >>> >>> >>> > >> "/usr/lib64/python2.6/multiprocessing/managers.py", >>> >>> >>> > >> line >>> >>> >>> > >> 740, >>> >>> >>> > >> in >>> >>> >>> > >> _callmethod >>> >>> >>> > >> raise convert_to_error(kind, result) >>> >>> >>> > >> GlusterCmdExecFailedException: Command execution >>> >>> >>> > >> failed >>> >>> >>> > >> error: Connection failed. Please check if gluster >>> >>> >>> > >> daemon >>> >>> >>> > >> is >>> >>> >>> > >> operational. >>> >>> >>> > >> return code: 1 >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) >>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving >>> >>> >>> > >> from >>> >>> >>> > >> state >>> >>> >>> > >> init >>> >>> >>> > >> -> >>> >>> >>> > >> state preparing >>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>> >>> >>> > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run >>> >>> >>> > >> and >>> >>> >>> > >> protect: >>> >>> >>> > >> repoStats(options=None) >>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>> >>> >>> > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run >>> >>> >>> > >> and >>> >>> >>> > >> protect: >>> >>> >>> > >> repoStats, Return response: {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) >>> >>> >>> > >> >>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) >>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving >>> >>> >>> > >> from >>> >>> >>> > >> state >>> >>> >>> > >> preparing >>> >>> >>> > >> -> >>> >>> >>> > >> state finished >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> >>> >>> >>> > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> >>> >>> >>> > >> 
>>> >>> >>> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>> >>> >>> > >> Owner.cancelAll requests {} >>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>> >>> >>> > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) >>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False >>> >>> >>> > >> ------------------------------- >>> >>> >>> > >> >>> >>> >>> > >> [root@compute4 ~]# service glusterd status >>> >>> >>> > >> glusterd is stopped >>> >>> >>> > >> [root@compute4 ~]# chkconfig --list | grep glusterd >>> >>> >>> > >> glusterd 0:off 1:off 2:on 3:on 4:on 5:on 6:off >>> >>> >>> > >> [root@compute4 ~]# >>> >>> >>> > >> >>> >>> >>> > >> Thanks, >>> >>> >>> > >> Punit >>> >>> >>> > >> >>> >>> >>> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote: >>> >>> >>> > >>> Can you send the corresponding error in vdsm.log from the host? >>> >>> >>> > >>> Also check if glusterd service is running. >>> >>> >>> > >>> >>> >>> >>> > >>> Thanks, >>> >>> >>> > >>> Kanagaraj

Hi Kaushal, I already have all the host entries in /etc/hosts for easy name resolution... I will try your change in glusterd.service and let you know whether it solves the problem or not... On Fri, Dec 5, 2014 at 9:50 PM, Kaushal M <kshlmster@gmail.com> wrote:
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
If even that fails, you could try adding an entry into /etc/hosts with the hostname of the system. This should prevent any more failures.
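For reference, a minimal sketch of what these two suggestions could look like on a host (an illustration, not a tested unit; the /etc/hosts address below is a placeholder, and the FQDN is the one that appears in the glusterd log further down this thread):

---------------
# /usr/lib/systemd/system/glusterd.service -- [Unit] section only,
# with Before=network-online.target replaced as suggested above.
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Wants=network-online.target
# systemd's usual "wait for the network" pattern would also add:
# After=network-online.target

# /etc/hosts -- one entry per peer; 10.10.0.5 is a placeholder IP
10.10.0.5   cpu05.zne01.hkg1.ovt.36stack.com   cpu05
---------------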
I still don't believe it's a problem with Gluster. Gluster uses APIs provided by the system to perform name resolution. These definitely work correctly, because you can start GlusterD later. Since the resolution failure only happens during boot, it points to a system or network setup issue during boot. To me it seems like the network isn't completely set up at that point in time.
~kaushal
Hi Kaushal,
It seems it's a bug in glusterfs 3.6... even though I changed my systemd unit to start the network service before glusterd, it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
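One way to sanity-check what ordering systemd actually applied on the failing boot is something like the following (a sketch, run on the rebooted host with the unit shown above):

[root@cpu05 ~]# systemd-analyze critical-chain glusterd.service
[root@cpu05 ~]# systemctl show glusterd.service -p After,Wants,Before
[root@cpu05 ~]# journalctl -b -u glusterd.service --no-pager

The critical-chain output shows which units glusterd actually waited for, and the journal of the current boot shows whether it started before the interfaces were configured.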
Thanks, Punit
On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user on the mailing list a while back who had a similar issue of GlusterD failing to start. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration, and the problem vanished when he began using dhcp. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing you could look at.
I'll try to find the mail thread to see if it was the same problem as
you.
~kaushal
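In case it helps, a quick way to see which network stack is actually managing the interfaces on the failing host might be (a sketch, not something suggested in the thread; nmcli is only present if NetworkManager is installed):

[root@cpu05 ~]# systemctl is-active network.service NetworkManager.service
[root@cpu05 ~]# nmcli general status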
On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com> wrote:
I don't know much about how the network target is brought up in CentOS7, but I'll try as much as I can.
It seems to me that, after the network has been brought up and by the time GlusterD is started, a. The machine hasn't yet received its hostname, or b. It hasn't yet registered with the name server.
This is causing name resolution failures.
I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a.
So it seems to be b. But that kind of registration happens only in DDNS setups, which doesn't seem to be the case for you.
Both of these reasons might be wrong (most likely they are wrong). You'd do well to ask for help from someone with more experience in systemd + networking.
~kaushal
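A quick check of whether the hostname and its resolution are in place when glusterd starts could look like this (a sketch; the FQDN is the one from the peer log quoted further down the thread):

[root@cpu05 ~]# hostnamectl status
[root@cpu05 ~]# hostname -f
[root@cpu05 ~]# getent hosts cpu05.zne01.hkg1.ovt.36stack.com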
On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
This is the host which I rebooted... would you mind letting me know how I can make the glusterd service come up after the network... I am using CentOS 7... if the network is the issue...
On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote:
This peer cannot be identified.
" [2014-12-03 02:29:25.998153] D [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
I don't know why this address is not being resolved during boot
time.
If this is a valid peer, then the only reason I can think of is that the network is not up.
If you had previously detached the peer forcefully, then that could have left stale entries in some volumes. In this case as well, GlusterD will fail to identify the peer.
Do either of these reasons seem a possibility to you?
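To check the stale-entry possibility, one could compare the peers glusterd knows about with the brick addresses stored on disk; a rough sketch (paths are the default GlusterD working directory, /var/lib/glusterd):

[root@cpu05 ~]# gluster peer status
[root@cpu05 ~]# grep -H hostname /var/lib/glusterd/peers/*
[root@cpu05 ~]# grep -H hostname /var/lib/glusterd/vols/*/bricks/*

If a brick's hostname does not match any peer's hostnames (or the local host), that would explain the "resolve brick failed in restore" error in the logs.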
On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote: > > Hi Kaushal, > > Please find the logs here :- http://ur1.ca/iyoe5 and > http://ur1.ca/iyoed > > On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> > wrote: >> >> Hey Punit, >> In the logs you've provided, GlusterD appears to be running >> correctly. >> Could you provide the logs for the time period when GlusterD >> attempts to >> start but fails. >> >> ~kaushal >> >> On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>> >>> Hi Kaushal, >>> >>> Please find the logs here :- http://ur1.ca/iyhs5 and >>> http://ur1.ca/iyhue >>> >>> Thanks, >>> punit >>> >>> >>> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> >>> wrote: >>>> >>>> Hey Punit, >>>> Could you start Glusterd in debug mode and provide the logs here? >>>> To start it in debug mode, append '-LDEBUG' to the ExecStart
>>>> in >>>> the service file. >>>> >>>> ~kaushal >>>> >>>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal < hypunit@gmail.com> >>>> wrote: >>>> > Hi, >>>> > >>>> > Can Any body help me on this ?? >>>> > >>>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal >>>> > <hypunit@gmail.com> >>>> > wrote: >>>> >> >>>> >> Hi Kaushal, >>>> >> >>>> >> Thanks for the detailed reply....let me explain my setup first >>>> >> :- >>>> >> >>>> >> 1. Ovirt Engine >>>> >> 2. 4* host as well as storage machine (Host and gluster >>>> >> combined) >>>> >> 3. Every host has 24 bricks... >>>> >> >>>> >> Now whenever the host machine reboot...it can come up but can >>>> >> not >>>> >> join the >>>> >> cluster again and through the following error "Gluster command >>>> >> [<UNKNOWN>] >>>> >> failed on server.." >>>> >> >>>> >> Please check my comment in line :- >>>> >> >>>> >> 1. Use the same string for doing the peer probe and for the >>>> >> brick >>>> >> address >>>> >> during volume create/add-brick. Ideally, we suggest you use >>>> >> properly >>>> >> resolvable FQDNs everywhere. If that is not possible, then use >>>> >> only >>>> >> IP >>>> >> addresses. Try to avoid short names. >>>> >> --------------- >>>> >> [root@cpu05 ~]# gluster peer status >>>> >> Number of Peers: 3 >>>> >> >>>> >> Hostname: cpu03.stack.com >>>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >>>> >> State: Peer in Cluster (Connected) >>>> >> >>>> >> Hostname: cpu04.stack.com >>>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >>>> >> State: Peer in Cluster (Connected) >>>> >> Other names: >>>> >> 10.10.0.8 >>>> >> >>>> >> Hostname: cpu02.stack.com >>>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >>>> >> State: Peer in Cluster (Connected) >>>> >> [root@cpu05 ~]# >>>> >> ---------------- >>>> >> 2. During boot up, make sure to launch glusterd only after
>>>> >> network is >>>> >> up. This will allow the new peer identification mechanism to do >>>> >> its >>>> >> job correctly. >>>> >> >> I think the service itself doing the same job.... >>>> >> >>>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >>>> >> [Unit] >>>> >> Description=GlusterFS, a clustered file-system server >>>> >> After=network.target rpcbind.service >>>> >> Before=network-online.target >>>> >> >>>> >> [Service] >>>> >> Type=forking >>>> >> PIDFile=/var/run/glusterd.pid >>>> >> LimitNOFILE=65536 >>>> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >>>> >> KillMode=process >>>> >> >>>> >> [Install] >>>> >> WantedBy=multi-user.target >>>> >> [root@cpu05 ~]# >>>> >> -------------------- >>>> >> >>>> >> gluster logs :- >>>> >> >>>> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >>>> >> [glusterfsd.c:2018:main] >>>> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd >>>> >> version >>>> >> 3.6.1 >>>> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >>>> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] >>>> >> 0-management: >>>> >> Maximum allowed open file descriptors set to 65536 >>>> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] >>>> >> 0-management: >>>> >> Using >>>> >> /var/lib/glusterd as working directory >>>> >> [2014-11-24 09:22:22.155216] W >>>> >> [rdma.c:4195:__gf_rdma_ctx_create] >>>> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >>>> >> such device) >>>> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >>>> >> 0-rdma.management: >>>> >> Failed to initialize IB Device >>>> >> [2014-11-24 09:22:22.155285] E >>>> >> [rpc-transport.c:333:rpc_transport_load] >>>> >> 0-rpc-transport: 'rdma' initialization failed >>>> >> [2014-11-24 09:22:22.155354] W >>>> >> [rpcsvc.c:1524:rpcsvc_transport_create] >>>> >> 0-rpc-service: cannot create listener, initing the transport >>>> >> failed >>>> >> [2014-11-24 09:22:22.156290] I >>>> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >>>> >> geo-replication >>>> >> module not installed in the system >>>> >> [2014-11-24 09:22:22.161318] I >>>> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >>>> >> retrieved >>>> >> op-version: 30600 >>>> >> [2014-11-24 09:22:22.821800] I >>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>> >> 0-management: >>>> >> connect returned 0 >>>> >> [2014-11-24 09:22:22.825810] I >>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>> >> 0-management: >>>> >> connect returned 0 >>>> >> [2014-11-24 09:22:22.828705] I >>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>> >> 0-management: >>>> >> connect returned 0 >>>> >> [2014-11-24 09:22:22.828771] I >>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>> >> 0-management: setting frame-timeout to 600 >>>> >> [2014-11-24 09:22:22.832670] I >>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>> >> 0-management: setting frame-timeout to 600 >>>> >> [2014-11-24 09:22:22.835919] I >>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>> >> 0-management: setting frame-timeout to 600 >>>> >> [2014-11-24 09:22:22.840209] E >>>> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >>>> >> resolve >>>> >> brick failed in restore >>>> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >>>> >> 0-management: >>>> >> Initialization of volume 'management' failed, review your >>>> >> volfile >>>> >> again >>>> >> [2014-11-24 09:22:22.840245] E >>>> >> 
[graph.c:322:glusterfs_graph_init] >>>> >> 0-management: initializing translator failed >>>> >> [2014-11-24 09:22:22.840264] E >>>> >> [graph.c:525:glusterfs_graph_activate] >>>> >> 0-graph: init failed >>>> >> [2014-11-24 09:22:22.840754] W >>>> >> [glusterfsd.c:1194:cleanup_and_exit] >>>> >> (--> >>>> >> 0-: received signum (0), shutting down >>>> >> >>>> >> Thanks, >>>> >> Punit >>>> >> >>>> >> >>>> >> >>>> >> >>>> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M >>>> >> <kshlmster@gmail.com> >>>> >> wrote: >>>> >>> >>>> >>> Based on the logs I can guess that glusterd is being started >>>> >>> before >>>> >>> the network has come up and that the addresses given to bricks >>>> >>> do >>>> >>> not >>>> >>> directly match the addresses used in during peer probe. >>>> >>> >>>> >>> The gluster_after_reboot log has the line "[2014-11-25 >>>> >>> 06:46:09.972113] E >>>> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >>>> >>> 0-glusterd: resolve brick failed in restore". >>>> >>> >>>> >>> Brick resolution fails when glusterd cannot match the address >>>> >>> for >>>> >>> the >>>> >>> brick, with one of the peers. Brick resolution happens in two >>>> >>> phases, >>>> >>> 1. We first try to identify the peer by performing string >>>> >>> comparisions >>>> >>> with the brick address and the peer addresses (The peer names >>>> >>> will >>>> >>> be >>>> >>> the names/addresses that were given when the peer was
>>>> >>> 2. If we don't find a match from step 1, we will then resolve >>>> >>> all >>>> >>> the >>>> >>> brick address and the peer addresses into addrinfo structs, >>>> >>> and >>>> >>> then >>>> >>> compare these structs to find a match. This process should >>>> >>> generally >>>> >>> find a match if available. This will fail only if the network >>>> >>> is >>>> >>> not >>>> >>> up yet as we cannot resolve addresses. >>>> >>> >>>> >>> The above steps are applicable only to glusterfs versions >>>> >>> >=3.6. >>>> >>> They >>>> >>> were introduced to reduce problems with peer identification, >>>> >>> like >>>> >>> the >>>> >>> one you encountered >>>> >>> >>>> >>> Since both of the steps failed to find a match in one run, but >>>> >>> succeeded later, we can come to the conclusion that, >>>> >>> a) the bricks don't have the exact same string used in peer >>>> >>> probe >>>> >>> for >>>> >>> their addresses as step 1 failed, and >>>> >>> b) the network was not up in the initial run, as step 2 failed >>>> >>> during >>>> >>> the initial run, but passed in the second run. >>>> >>> >>>> >>> Please let me know if my conclusion is correct. >>>> >>> >>>> >>> If it is, you can solve your problem in two ways. >>>> >>> 1. Use the same string for doing the peer probe and for the >>>> >>> brick >>>> >>> address during volume create/add-brick. Ideally, we suggest >>>> >>> you >>>> >>> use >>>> >>> properly resolvable FQDNs everywhere. If that is not
>>>> >>> then >>>> >>> use only IP addresses. Try to avoid short names. >>>> >>> 2. During boot up, make sure to launch glusterd only after
>>>> >>> network >>>> >>> is up. This will allow the new peer identification mechanism >>>> >>> to do >>>> >>> its >>>> >>> job correctly. >>>> >>> >>>> >>> >>>> >>> If you have already followed these steps and yet still hit
>>>> >>> problem, then please provide more information (setup, logs, >>>> >>> etc.). >>>> >>> It >>>> >>> could be much different problem that you are facing. >>>> >>> >>>> >>> ~kaushal >>>> >>> >>>> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal >>>> >>> <hypunit@gmail.com> >>>> >>> wrote: >>>> >>> > Is there any one can help on this ?? >>>> >>> > >>>> >>> > Thanks, >>>> >>> > punit >>>> >>> > >>>> >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal >>>> >>> > <hypunit@gmail.com> >>>> >>> > wrote: >>>> >>> >> >>>> >>> >> Hi, >>>> >>> >> >>>> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >>>> >>> >> >>>> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >>>> >>> >> <kmayilsa@redhat.com> >>>> >>> >> wrote: >>>> >>> >>> >>>> >>> >>> [+Gluster-users@gluster.org] >>>> >>> >>> >>>> >>> >>> "Initialization of volume 'management' failed, review your >>>> >>> >>> volfile >>>> >>> >>> again", glusterd throws this error when the service is >>>> >>> >>> started >>>> >>> >>> automatically >>>> >>> >>> after the reboot. But the service is successfully started >>>> >>> >>> later >>>> >>> >>> manually by >>>> >>> >>> the user. >>>> >>> >>> >>>> >>> >>> can somebody from gluster-users please help on this? >>>> >>> >>> >>>> >>> >>> glusterfs version: 3.5.1 >>>> >>> >>> >>>> >>> >>> Thanks, >>>> >>> >>> Kanagaraj >>>> >>> >>> >>>> >>> >>> ----- Original Message ----- >>>> >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >>>> >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >>>> >>> >>> > Cc: users@ovirt.org >>>> >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >>>> >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] >>>> >>> >>> > failed on >>>> >>> >>> > server... >>>> >>> >>> > >>>> >>> >>> > Hi Kanagraj, >>>> >>> >>> > >>>> >>> >>> > Please check the attached log files....i didn't find any >>>> >>> >>> > thing >>>> >>> >>> > special.... >>>> >>> >>> > >>>> >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj >>>> >>> >>> > <kmayilsa@redhat.com> >>>> >>> >>> > wrote: >>>> >>> >>> > >>>> >>> >>> > > Do you see any errors in >>>> >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or >>>> >>> >>> > > vdsm.log >>>> >>> >>> > > when >>>> >>> >>> > > the >>>> >>> >>> > > service is trying to start automatically after the >>>> >>> >>> > > reboot? >>>> >>> >>> > > >>>> >>> >>> > > Thanks, >>>> >>> >>> > > Kanagaraj >>>> >>> >>> > > >>>> >>> >>> > > >>>> >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >>>> >>> >>> > > >>>> >>> >>> > > Hi Kanagaraj, >>>> >>> >>> > > >>>> >>> >>> > > Yes...once i will start the gluster service and
>>>> >>> >>> > > vdsmd >>>> >>> >>> > > ...the >>>> >>> >>> > > host >>>> >>> >>> > > can connect to cluster...but the question is why it's >>>> >>> >>> > > not >>>> >>> >>> > > started >>>> >>> >>> > > even it >>>> >>> >>> > > has chkconfig enabled... >>>> >>> >>> > > >>>> >>> >>> > > I have tested it in two host cluster >>>> >>> >>> > > environment...(Centos 6.6 >>>> >>> >>> > > and >>>> >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed >>>> >>> >>> > > to >>>> >>> >>> > > reconnect >>>> >>> >>> > > in >>>> >>> >>> > > to >>>> >>> >>> > > cluster after reboot.... >>>> >>> >>> > > >>>> >>> >>> > > In both the environment glusterd enabled for next >>>> >>> >>> > > boot....but >>>> >>> >>> > > it's >>>> >>> >>> > > failed with the same error....seems it's bug in either >>>> >>> >>> > > gluster or >>>> >>> >>> > > Ovirt ?? >>>> >>> >>> > > >>>> >>> >>> > > Please help me to find the workaround here if can not >>>> >>> >>> > > resolve >>>> >>> >>> > > it...as >>>> >>> >>> > > without this the Host machine can not connect after >>>> >>> >>> > > reboot....that >>>> >>> >>> > > means >>>> >>> >>> > > engine will consider it as down and every time need to >>>> >>> >>> > > manually >>>> >>> >>> > > start >>>> >>> >>> > > the >>>> >>> >>> > > gluster service and vdsmd... ?? >>>> >>> >>> > > >>>> >>> >>> > > Thanks, >>>> >>> >>> > > Punit >>>> >>> >>> > > >>>> >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj >>>> >>> >>> > > <kmayilsa@redhat.com> >>>> >>> >>> > > wrote: >>>> >>> >>> > > >>>> >>> >>> > >> From vdsm.log "error: Connection failed. Please >>>> >>> >>> > >> check if >>>> >>> >>> > >> gluster >>>> >>> >>> > >> daemon >>>> >>> >>> > >> is operational." >>>> >>> >>> > >> >>>> >>> >>> > >> Starting glusterd service should fix this issue. >>>> >>> >>> > >> 'service >>>> >>> >>> > >> glusterd >>>> >>> >>> > >> start' >>>> >>> >>> > >> But i am wondering why the glusterd was not started >>>> >>> >>> > >> automatically >>>> >>> >>> > >> after >>>> >>> >>> > >> the reboot. 
>>>> >>> >>> > >> >>>> >>> >>> > >> Thanks, >>>> >>> >>> > >> Kanagaraj >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >>>> >>> >>> > >> >>>> >>> >>> > >> Hi Kanagaraj, >>>> >>> >>> > >> >>>> >>> >>> > >> Please find the attached VDSM logs :- >>>> >>> >>> > >> >>>> >>> >>> > >> ---------------- >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>> >>> >>> > >> Owner.cancelAll requests {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >>>> >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 >>>> >>> >>> > >> aborting >>>> >>> >>> > >> False >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>> >>> >>> > >> from >>>> >>> >>> > >> state >>>> >>> >>> > >> init >>>> >>> >>> > >> -> >>>> >>> >>> > >> state preparing >>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>> >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run >>>> >>> >>> > >> and >>>> >>> >>> > >> protect: >>>> >>> >>> > >> repoStats(options=None) >>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>> >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run >>>> >>> >>> > >> and >>>> >>> >>> > >> protect: >>>> >>> >>> > >> repoStats, Return response: {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >>>> >>> >>> > >> >>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>> >>> >>> > >> from >>>> >>> >>> > >> state >>>> >>> >>> > >> preparing >>>> >>> >>> > >> -> >>>> >>> >>> > >> state finished >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>> >>> >>> > >> Owner.cancelAll requests {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 >>>> >>> >>> > >> aborting >>>> >>> >>> > >> False >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) >>>> >>> >>> > >> client >>>> >>> >>> > >> [10.10.10.2]::call >>>> >>> >>> > >> getCapabilities with () {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,553::utils::738::root::(execCmd) >>>> >>> >>> > >> /sbin/ip route show to 
0.0.0.0/0 table all (cwd None) >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) >>>> >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-object',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-plugin',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-account',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-proxy',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-doc',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-container',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('glusterfs-geo-replication',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,600::caps::646::root::(get) >>>> >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 >>>> >>> >>> > >> required >>>> >>> >>> > >> >= >>>> >>> >>> > >> 0.10.2-31 >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) >>>> >>> >>> > >> return >>>> >>> >>> > >> getCapabilities >>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>> >>> >>> > >> 'info': >>>> >>> >>> > >> {'HBAInventory': >>>> >>> >>> > >> {'iSCSI': [{'InitiatorName': >>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >>>> >>> >>> > >> 'FC': >>>> >>> >>> > >> []}, 'packages2': {'kernel': {'release': >>>> >>> >>> > >> '431.el6.x86_64', >>>> >>> >>> > >> 'buildtime': >>>> >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >>>> >>> >>> > >> {'release': >>>> >>> >>> > >> '1.el6', >>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, >>>> >>> >>> > >> 'glusterfs-fuse': >>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>> >>> >>> > >> 'version': >>>> >>> >>> > >> '3.5.1'}, >>>> >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': >>>> >>> >>> > >> 1402324637L, >>>> >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': >>>> >>> >>> > >> '1.gitdb83943.el6', >>>> >>> >>> > >> 'buildtime': >>>> >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': >>>> >>> >>> > >> {'release': >>>> >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, >>>> >>> >>> > >> 'version': >>>> >>> >>> > >> '0.12.1.2'}, >>>> >>> >>> > >> 'qemu-img': {'release': '2.415.el6_5.10', >>>> >>> >>> > >> 'buildtime': >>>> >>> >>> > >> 1402435700L, >>>> >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': 
{'release': >>>> >>> >>> > >> '29.el6_5.9', >>>> >>> >>> > >> 'buildtime': >>>> >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': >>>> >>> >>> > >> {'release': >>>> >>> >>> > >> '1.el6', >>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >>>> >>> >>> > >> {'release': >>>> >>> >>> > >> '2.el6', >>>> >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >>>> >>> >>> > >> 'glusterfs-server': >>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>> >>> >>> > >> 'version': >>>> >>> >>> > >> '3.5.1'}}, >>>> >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, >>>> >>> >>> > >> 'cpuModel': >>>> >>> >>> > >> 'Intel(R) >>>> >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': >>>> >>> >>> > >> 'false', >>>> >>> >>> > >> 'hooks': >>>> >>> >>> > >> {}, >>>> >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': >>>> >>> >>> > >> {'mode': '1'}, >>>> >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', >>>> >>> >>> > >> '2.3'], >>>> >>> >>> > >> 'networks': >>>> >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': >>>> >>> >>> > >> '43.252.176.16', >>>> >>> >>> > >> 'bridged': >>>> >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', >>>> >>> >>> > >> 'ipv4addrs': >>>> >>> >>> > >> [' >>>> >>> >>> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], >>>> >>> >>> > >> 'interface': >>>> >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': >>>> >>> >>> > >> '43.25.17.1'}, >>>> >>> >>> > >> 'Internal': >>>> >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'BOOTPROTO': >>>> >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': >>>> >>> >>> > >> 'Bridge', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs': >>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>> >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', >>>> >>> >>> > >> 'stp': >>>> >>> >>> > >> 'off', >>>> >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', >>>> >>> >>> > >> 'ports': >>>> >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', >>>> >>> >>> > >> 'addr': >>>> >>> >>> > >> '10.10.10.6', >>>> >>> >>> > >> 'bridged': False, 'ipv6addrs': >>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask': >>>> >>> >>> > >> '255.255.255.0', >>>> >>> >>> > >> 'ipv4addrs': [' >>>> >>> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], >>>> >>> >>> > >> 'interface': >>>> >>> >>> > >> u'bond1', >>>> >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': >>>> >>> >>> > >> {'iface': >>>> >>> >>> > >> 'VMNetwork', >>>> >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>> >>> >>> > >> 'none', >>>> >>> >>> > >> 'STP': >>>> >>> >>> > >> 'off', >>>> >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'bridged': >>>> >>> >>> > >> True, >>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'gateway': >>>> >>> >>> > >> '', >>>> >>> >>> > >> 'bootproto4': >>>> >>> >>> > 
>> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, >>>> >>> >>> > >> 'bridges': >>>> >>> >>> > >> {'Internal': >>>> >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>> >>> >>> > >> 'none', >>>> >>> >>> > >> 'STP': >>>> >>> >>> > >> 'off', >>>> >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'ipv6addrs': >>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', >>>> >>> >>> > >> 'netmask': '', >>>> >>> >>> > >> 'stp': >>>> >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', >>>> >>> >>> > >> 'gateway': >>>> >>> >>> > >> '', >>>> >>> >>> > >> 'opts': >>>> >>> >>> > >> {'topology_change_detected': '0', >>>> >>> >>> > >> 'multicast_last_member_count': >>>> >>> >>> > >> '2', >>>> >>> >>> > >> 'hash_elasticity': '4', >>>> >>> >>> > >> 'multicast_query_response_interval': >>>> >>> >>> > >> '999', >>>> >>> >>> > >> 'multicast_snooping': '1', >>>> >>> >>> > >> 'multicast_startup_query_interval': >>>> >>> >>> > >> '3124', >>>> >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': >>>> >>> >>> > >> '25496', >>>> >>> >>> > >> 'max_age': >>>> >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', >>>> >>> >>> > >> 'root_id': >>>> >>> >>> > >> '8000.001018cddaac', 'priority': '32768', >>>> >>> >>> > >> 'multicast_membership_interval': >>>> >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >>>> >>> >>> > >> 'multicast_querier': >>>> >>> >>> > >> '0', >>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>> >>> >>> > >> '199', >>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>> >>> >>> > >> '8000.001018cddaac', >>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>> >>> >>> > >> 'gc_timer': >>>> >>> >>> > >> '31', >>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>> >>> >>> > >> 'multicast_last_member_interval': >>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>> >>> >>> > >> 'ports': >>>> >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': >>>> >>> >>> > >> {'DEFROUTE': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', >>>> >>> >>> > >> 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': >>>> >>> >>> > >> 'VMNetwork', >>>> >>> >>> > >> 'TYPE': >>>> >>> >>> > >> 'Bridge', >>>> >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': >>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>> >>> >>> > >> 'ipv6gateway': >>>> >>> >>> > >> '::', >>>> >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': >>>> >>> >>> > >> '0', >>>> >>> >>> > >> 'multicast_last_member_count': '2', >>>> >>> >>> > >> 'hash_elasticity': >>>> >>> >>> > >> '4', >>>> >>> >>> > >> 'multicast_query_response_interval': '999', >>>> >>> >>> > >> 'multicast_snooping': >>>> >>> >>> > >> '1', >>>> >>> >>> > >> 'multicast_startup_query_interval': '3124', >>>> >>> >>> > >> 'hello_timer': >>>> >>> >>> > >> '131', >>>> >>> >>> > >> 'multicast_querier_interval': '25496', 'max_age': >>>> >>> >>> > >> '1999', >>>> >>> >>> > >> 'hash_max': >>>> 
>>> >>> > >> '512', 'stp_state': '0', 'root_id': >>>> >>> >>> > >> '8000.60eb6920b46c', >>>> >>> >>> > >> 'priority': >>>> >>> >>> > >> '32768', 'multicast_membership_interval': '25996', >>>> >>> >>> > >> 'root_path_cost': >>>> >>> >>> > >> '0', >>>> >>> >>> > >> 'root_port': '0', 'multicast_querier': '0', >>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>> >>> >>> > >> '199', >>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>> >>> >>> > >> '8000.60eb6920b46c', >>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>> >>> >>> > >> 'gc_timer': >>>> >>> >>> > >> '31', >>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>> >>> >>> > >> 'multicast_last_member_interval': >>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>> >>> >>> > >> 'ports': >>>> >>> >>> > >> ['bond0.36']}}, 'uuid': >>>> >>> >>> > >> '44454C4C-4C00-1057-8053-B7C04F504E31', >>>> >>> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': >>>> >>> >>> > >> {'permhwaddr': >>>> >>> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': >>>> >>> >>> > >> 'yes', >>>> >>> >>> > >> 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', >>>> >>> >>> > >> 'MASTER': >>>> >>> >>> > >> 'bond1', >>>> >>> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>> >>> >>> > >> 'speed': >>>> >>> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', >>>> >>> >>> > >> 'addr': '', >>>> >>> >>> > >> 'cfg': >>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'HWADDR': >>>> >>> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': >>>> >>> >>> > >> 'eth2', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', >>>> >>> >>> > >> 'ipv4addrs': [], >>>> >>> >>> > >> 'hwaddr': >>>> >>> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': >>>> >>> >>> > >> {'permhwaddr': >>>> >>> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': >>>> >>> >>> > >> 'yes', >>>> >>> >>> > >> 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', >>>> >>> >>> > >> 'MASTER': >>>> >>> >>> > >> 'bond0', >>>> >>> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>> >>> >>> > >> '60:eb:69:20:b4:6c', >>>> >>> >>> > >> 'speed': >>>> >>> >>> > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', >>>> >>> >>> > >> 'addr': '', >>>> >>> >>> > >> 'cfg': >>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'HWADDR': >>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': >>>> >>> >>> > >> 'eth0', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'yes'}, >>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', >>>> >>> >>> > >> 'ipv4addrs': [], >>>> >>> >>> > >> 'hwaddr': >>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, >>>> >>> >>> > >> 'software_revision': '1', >>>> >>> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', >>>> >>> >>> > >> '3.5'], >>>> >>> >>> > >> 'cpuFlags': >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >>>> >>> >>> > >> 'ISCSIInitiatorName': >>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8', >>>> >>> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs': >>>> >>> >>> > >> ['3.0', >>>> >>> >>> > >> '3.1', >>>> >>> >>> > >> '3.2', >>>> >>> >>> > >> '3.3', >>>> >>> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': >>>> >>> >>> > >> '321', >>>> >>> >>> > >> 'bondings': >>>> >>> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', >>>> >>> >>> > >> 'netmask': '', >>>> >>> >>> > >> 'slaves': >>>> >>> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': >>>> >>> >>> > >> '', >>>> >>> >>> > >> 'cfg': >>>> >>> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'BONDING_OPTS': >>>> >>> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': >>>> >>> >>> > >> 'yes'}, >>>> >>> >>> > >> 'ipv6addrs': >>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', >>>> >>> >>> > >> 'netmask': '', >>>> >>> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', >>>> >>> >>> > >> 'slaves': >>>> >>> >>> > >> ['eth0', >>>> >>> >>> > >> 'eth1'], >>>> >>> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': >>>> >>> >>> > >> {'addr': >>>> >>> >>> > >> '10.10.10.6', >>>> >>> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': >>>> >>> >>> > >> '255.255.255.0', >>>> >>> >>> > >> 'BOOTPROTO': >>>> >>> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', >>>> >>> >>> > >> 'DEVICE': >>>> >>> >>> > >> 'bond1', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>>> >>> >>> > >> ['10.10.10.6/24' >>>> >>> >>> > >> <http://10.10.10.6/24%27>], 'hwaddr': >>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>> >>> >>> > >> 'slaves': >>>> >>> >>> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': >>>> >>> >>> > >> '4'}}, >>>> >>> >>> > >> 'bond2': >>>> >>> >>> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', >>>> >>> >>> > >> 'slaves': >>>> >>> >>> > >> [], >>>> >>> >>> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', >>>> >>> >>> > >> 'cfg': {}, >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': >>>> >>> >>> > >> '00:00:00:00:00:00'}}, >>>> >>> >>> > >> 'software_version': '4.16', 'memSize': '24019', >>>> >>> >>> > >> 'cpuSpeed': >>>> >>> >>> > >> '2667.000', >>>> >>> >>> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': >>>> >>> >>> > >> [6, >>>> >>> >>> > >> 7, 8, >>>> >>> >>> > >> 9, >>>> >>> >>> > >> 10, 11, >>>> >>> >>> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': >>>> >>> >>> > >> '12278', >>>> >>> >>> > >> 'cpus': >>>> >>> >>> > >> [0, >>>> >>> >>> > >> 1, 2, >>>> >>> >>> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': >>>> >>> >>> > >> 'Snow >>>> >>> >>> > >> Man', >>>> >>> >>> > >> 
'vlans': >>>> >>> >>> > >> {'bond0.10': {'iface': 'bond0', 'addr': >>>> >>> >>> > >> '43.25.17.16', >>>> >>> >>> > >> 'cfg': >>>> >>> >>> > >> {'DEFROUTE': >>>> >>> >>> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'GATEWAY': >>>> >>> >>> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': >>>> >>> >>> > >> '255.255.255.0', >>>> >>> >>> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'yes'}, >>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'vlanid': >>>> >>> >>> > >> 10, >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>>> >>> >>> > >> ['43.25.17.16/24'] >>>> >>> >>> > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': >>>> >>> >>> > >> 'bond0', >>>> >>> >>> > >> 'addr': >>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'vlanid': >>>> >>> >>> > >> 36, >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': >>>> >>> >>> > >> {'iface': >>>> >>> >>> > >> 'bond1', >>>> >>> >>> > >> 'addr': >>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>>> >>> >>> > >> 'vlanid': >>>> >>> >>> > >> 100, >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', >>>> >>> >>> > >> 'kvmEnabled': >>>> >>> >>> > >> 'true', >>>> >>> >>> > >> 'guestOverhead': '65', 'cpuThreads': '24', >>>> >>> >>> > >> 'emulatedMachines': >>>> >>> >>> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', >>>> >>> >>> > >> u'rhel6.2.0', >>>> >>> >>> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', >>>> >>> >>> > >> u'rhel5.4.4', >>>> >>> >>> > >> u'rhel5.4.0'], >>>> >>> >>> > >> 'operatingSystem': {'release': '5.el6.centos.11.1', >>>> >>> >>> > >> 'version': >>>> >>> >>> > >> '6', >>>> >>> >>> > >> 'name': >>>> >>> >>> > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) >>>> >>> >>> > >> client >>>> >>> >>> > >> [10.10.10.2]::call >>>> >>> >>> > >> getHardwareInfo with () {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) >>>> >>> >>> > >> return >>>> >>> >>> > >> getHardwareInfo >>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>> >>> >>> > >> 'info': >>>> >>> >>> > >> {'systemProductName': 'CS24-TY', >>>> >>> >>> > >> 'systemSerialNumber': >>>> >>> >>> > >> '7LWSPN1', >>>> >>> >>> > >> 'systemFamily': 'Server', 'systemVersion': 'A00', >>>> >>> >>> > >> 'systemUUID': >>>> >>> >>> > >> '44454c4c-4c00-1057-8053-b7c04f504e31', >>>> >>> >>> > >> 'systemManufacturer': >>>> >>> >>> > >> 'Dell'}} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 
21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036]
Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing
Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {}
Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False
-------------------------------

[root@compute4 ~]# service glusterd status
glusterd is stopped
[root@compute4 ~]# chkconfig --list | grep glusterd
glusterd        0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@compute4 ~]#

Thanks,
Punit

On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:

> Can you send the corresponding error in vdsm.log from the host?
>
> Also check if glusterd service is running.
>
> Thanks,
> Kanagaraj

Hi Kaushal,

Still the same error...even after trying your suggested workaround :-
-------------------
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
-------------------

Thanks,
Punit

On Sat, Dec 6, 2014 at 11:44 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
I already have entries for all the hosts in /etc/hosts for easy resolution.... I will try your method in glusterd.service, check, and let you know whether it solves the problem.....
On Fri, Dec 5, 2014 at 9:50 PM, Kaushal M <kshlmster@gmail.com> wrote:
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
If even that fails, you could try adding an entry into /etc/hosts with the hostname of the system. This should prevent any more failures.
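For illustration, the kind of /etc/hosts entry meant here would be something like the following (a sketch only; the address is a placeholder, the name is the one that fails to resolve in the glusterd log quoted further down):
-------------------
10.10.0.5   cpu05.zne01.hkg1.ovt.36stack.com   cpu05
-------------------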
I still don't believe it's a problem with Gluster. Gluster uses APIs provided by the system to perform name resolution. These definitely work correctly, because you can start GlusterD later. Since the resolution failure only happens during boot, it points to system or network setup issues during boot. To me it seems like the network isn't completely set up at that point in time.
~kaushal
Hi Kaushal,
It seems it's a bug in glusterfs 3.6....even if I configure systemd to start the network service before glusterd...it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
Thanks, Punit
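For comparison, the variant Kaushal suggested would look roughly like this (a minimal sketch; pairing Wants=network-online.target with After=network-online.target is the usual systemd idiom and is an assumption here, as is dropping the ExecStartPre workaround):
-------------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target network-online.target rpcbind.service
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
-------------------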
On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user on the mailing list a while back with a similar issue of GlusterD failing to start. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration and the problem vanished when he began using dhcp. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing you could look at.
I'll try to find the mail thread to see if it was the same problem as yours.
~kaushal
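On CentOS 7 hosts where NetworkManager manages the interfaces, it is also worth checking that the wait-online service is enabled, since network-online.target only really delays later units when such a service is active (a sketch, assuming NetworkManager is in use on these hosts):
-------------------
# is the wait-online service enabled/active?
systemctl status NetworkManager-wait-online.service
# if not, enable it so that network-online.target actually waits for the network
systemctl enable NetworkManager-wait-online.service
-------------------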
On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com> wrote:
I don't know much about how the network target is brought up in CentOS7, but I'll try as much as I can.
It seems to me that, after the network has been brought up and by the time GlusterD is started,
a. The machine hasn't yet received its hostname, or
b. It hasn't yet registered with the name server.
This is causing name resolution failures.
I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a.
So it seems to be b. But that kind of registration happens only in DDNS systems, which doesn't seem to be the case for you.
Both of these reasons might be wrong (most likely wrong). You'd do well to ask for help from someone with more experience in systemd + networking.
~kaushal
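As a quick sanity check of a. and b. above, something like the following could be run right after a reboot and again once the system has settled (a sketch; the peer name is the one that fails to resolve in the glusterd log quoted below):
-------------------
hostname -f                                     # does the host already know its own FQDN?
getent hosts $(hostname -f)                     # can it resolve its own name?
getent hosts cpu05.zne01.hkg1.ovt.36stack.com   # can it resolve the peer name glusterd needs?
-------------------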
On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
This is the host...which I rebooted...would you mind letting me know how I can make the glusterd service come up after the network...I am using CentOS 7...if the network is the issue...
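For reference, one way to do that on CentOS 7 is to override the packaged unit with a copy under /etc/systemd/system and adjust its ordering as in the sketch above (a sketch of the mechanism only, not an officially recommended unit):
-------------------
# copy the packaged unit so local edits survive package updates
cp /usr/lib/systemd/system/glusterd.service /etc/systemd/system/glusterd.service
# edit the [Unit] section to use Wants=network-online.target and After=network-online.target (as sketched earlier)
systemctl daemon-reload
systemctl enable glusterd.service
-------------------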
On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote:
>
> This peer cannot be identified.
>
> "[2014-12-03 02:29:25.998153] D [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
>
> I don't know why this address is not being resolved during boot time. If
> this is a valid peer, then the only reason I can think of is that the
> network is not up. > > If you had previously detached the peer forcefully, the that could > have > left stale entries in some volumes. In this case as well, GlusterD > will fail > to identify the peer. > > Do either of these reasons seem a possibility to you? > > On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >> >> Hi Kaushal, >> >> Please find the logs here :- http://ur1.ca/iyoe5 and >> http://ur1.ca/iyoed >> >> On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> >> wrote: >>> >>> Hey Punit, >>> In the logs you've provided, GlusterD appears to be running >>> correctly. >>> Could you provide the logs for the time period when GlusterD >>> attempts to >>> start but fails. >>> >>> ~kaushal >>> >>> On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>>> >>>> Hi Kaushal, >>>> >>>> Please find the logs here :- http://ur1.ca/iyhs5 and >>>> http://ur1.ca/iyhue >>>> >>>> Thanks, >>>> punit >>>> >>>> >>>> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com
>>>> wrote: >>>>> >>>>> Hey Punit, >>>>> Could you start Glusterd in debug mode and provide the logs here? >>>>> To start it in debug mode, append '-LDEBUG' to the ExecStart
>>>>> in >>>>> the service file. >>>>> >>>>> ~kaushal >>>>> >>>>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal < hypunit@gmail.com> >>>>> wrote: >>>>> > Hi, >>>>> > >>>>> > Can Any body help me on this ?? >>>>> > >>>>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal >>>>> > <hypunit@gmail.com> >>>>> > wrote: >>>>> >> >>>>> >> Hi Kaushal, >>>>> >> >>>>> >> Thanks for the detailed reply....let me explain my setup first >>>>> >> :- >>>>> >> >>>>> >> 1. Ovirt Engine >>>>> >> 2. 4* host as well as storage machine (Host and gluster >>>>> >> combined) >>>>> >> 3. Every host has 24 bricks... >>>>> >> >>>>> >> Now whenever the host machine reboot...it can come up but can >>>>> >> not >>>>> >> join the >>>>> >> cluster again and through the following error "Gluster command >>>>> >> [<UNKNOWN>] >>>>> >> failed on server.." >>>>> >> >>>>> >> Please check my comment in line :- >>>>> >> >>>>> >> 1. Use the same string for doing the peer probe and for the >>>>> >> brick >>>>> >> address >>>>> >> during volume create/add-brick. Ideally, we suggest you use >>>>> >> properly >>>>> >> resolvable FQDNs everywhere. If that is not possible, then use >>>>> >> only >>>>> >> IP >>>>> >> addresses. Try to avoid short names. >>>>> >> --------------- >>>>> >> [root@cpu05 ~]# gluster peer status >>>>> >> Number of Peers: 3 >>>>> >> >>>>> >> Hostname: cpu03.stack.com >>>>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >>>>> >> State: Peer in Cluster (Connected) >>>>> >> >>>>> >> Hostname: cpu04.stack.com >>>>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >>>>> >> State: Peer in Cluster (Connected) >>>>> >> Other names: >>>>> >> 10.10.0.8 >>>>> >> >>>>> >> Hostname: cpu02.stack.com >>>>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >>>>> >> State: Peer in Cluster (Connected) >>>>> >> [root@cpu05 ~]# >>>>> >> ---------------- >>>>> >> 2. During boot up, make sure to launch glusterd only after
>>>>> >> network is >>>>> >> up. This will allow the new peer identification mechanism to do >>>>> >> its >>>>> >> job correctly. >>>>> >> >> I think the service itself doing the same job.... >>>>> >> >>>>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >>>>> >> [Unit] >>>>> >> Description=GlusterFS, a clustered file-system server >>>>> >> After=network.target rpcbind.service >>>>> >> Before=network-online.target >>>>> >> >>>>> >> [Service] >>>>> >> Type=forking >>>>> >> PIDFile=/var/run/glusterd.pid >>>>> >> LimitNOFILE=65536 >>>>> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >>>>> >> KillMode=process >>>>> >> >>>>> >> [Install] >>>>> >> WantedBy=multi-user.target >>>>> >> [root@cpu05 ~]# >>>>> >> -------------------- >>>>> >> >>>>> >> gluster logs :- >>>>> >> >>>>> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >>>>> >> [glusterfsd.c:2018:main] >>>>> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd >>>>> >> version >>>>> >> 3.6.1 >>>>> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >>>>> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] >>>>> >> 0-management: >>>>> >> Maximum allowed open file descriptors set to 65536 >>>>> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] >>>>> >> 0-management: >>>>> >> Using >>>>> >> /var/lib/glusterd as working directory >>>>> >> [2014-11-24 09:22:22.155216] W >>>>> >> [rdma.c:4195:__gf_rdma_ctx_create] >>>>> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >>>>> >> such device) >>>>> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >>>>> >> 0-rdma.management: >>>>> >> Failed to initialize IB Device >>>>> >> [2014-11-24 09:22:22.155285] E >>>>> >> [rpc-transport.c:333:rpc_transport_load] >>>>> >> 0-rpc-transport: 'rdma' initialization failed >>>>> >> [2014-11-24 09:22:22.155354] W >>>>> >> [rpcsvc.c:1524:rpcsvc_transport_create] >>>>> >> 0-rpc-service: cannot create listener, initing the transport >>>>> >> failed >>>>> >> [2014-11-24 09:22:22.156290] I >>>>> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >>>>> >> geo-replication >>>>> >> module not installed in the system >>>>> >> [2014-11-24 09:22:22.161318] I >>>>> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >>>>> >> retrieved >>>>> >> op-version: 30600 >>>>> >> [2014-11-24 09:22:22.821800] I >>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>> >> 0-management: >>>>> >> connect returned 0 >>>>> >> [2014-11-24 09:22:22.825810] I >>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>> >> 0-management: >>>>> >> connect returned 0 >>>>> >> [2014-11-24 09:22:22.828705] I >>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>> >> 0-management: >>>>> >> connect returned 0 >>>>> >> [2014-11-24 09:22:22.828771] I >>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>> >> 0-management: setting frame-timeout to 600 >>>>> >> [2014-11-24 09:22:22.832670] I >>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>> >> 0-management: setting frame-timeout to 600 >>>>> >> [2014-11-24 09:22:22.835919] I >>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>> >> 0-management: setting frame-timeout to 600 >>>>> >> [2014-11-24 09:22:22.840209] E >>>>> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >>>>> >> resolve >>>>> >> brick failed in restore >>>>> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >>>>> >> 0-management: >>>>> >> Initialization of volume 'management' failed, review your >>>>> >> volfile 
>>>>> >> again >>>>> >> [2014-11-24 09:22:22.840245] E >>>>> >> [graph.c:322:glusterfs_graph_init] >>>>> >> 0-management: initializing translator failed >>>>> >> [2014-11-24 09:22:22.840264] E >>>>> >> [graph.c:525:glusterfs_graph_activate] >>>>> >> 0-graph: init failed >>>>> >> [2014-11-24 09:22:22.840754] W >>>>> >> [glusterfsd.c:1194:cleanup_and_exit] >>>>> >> (--> >>>>> >> 0-: received signum (0), shutting down >>>>> >> >>>>> >> Thanks, >>>>> >> Punit >>>>> >> >>>>> >> >>>>> >> >>>>> >> >>>>> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M >>>>> >> <kshlmster@gmail.com> >>>>> >> wrote: >>>>> >>> >>>>> >>> Based on the logs I can guess that glusterd is being started >>>>> >>> before >>>>> >>> the network has come up and that the addresses given to bricks >>>>> >>> do >>>>> >>> not >>>>> >>> directly match the addresses used in during peer probe. >>>>> >>> >>>>> >>> The gluster_after_reboot log has the line "[2014-11-25 >>>>> >>> 06:46:09.972113] E >>>>> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >>>>> >>> 0-glusterd: resolve brick failed in restore". >>>>> >>> >>>>> >>> Brick resolution fails when glusterd cannot match the address >>>>> >>> for >>>>> >>> the >>>>> >>> brick, with one of the peers. Brick resolution happens in two >>>>> >>> phases, >>>>> >>> 1. We first try to identify the peer by performing string >>>>> >>> comparisions >>>>> >>> with the brick address and the peer addresses (The peer names >>>>> >>> will >>>>> >>> be >>>>> >>> the names/addresses that were given when the peer was
>>>>> >>> 2. If we don't find a match from step 1, we will then resolve >>>>> >>> all >>>>> >>> the >>>>> >>> brick address and the peer addresses into addrinfo structs, >>>>> >>> and >>>>> >>> then >>>>> >>> compare these structs to find a match. This process should >>>>> >>> generally >>>>> >>> find a match if available. This will fail only if the network >>>>> >>> is >>>>> >>> not >>>>> >>> up yet as we cannot resolve addresses. >>>>> >>> >>>>> >>> The above steps are applicable only to glusterfs versions >>>>> >>> >=3.6. >>>>> >>> They >>>>> >>> were introduced to reduce problems with peer identification, >>>>> >>> like >>>>> >>> the >>>>> >>> one you encountered >>>>> >>> >>>>> >>> Since both of the steps failed to find a match in one run, but >>>>> >>> succeeded later, we can come to the conclusion that, >>>>> >>> a) the bricks don't have the exact same string used in peer >>>>> >>> probe >>>>> >>> for >>>>> >>> their addresses as step 1 failed, and >>>>> >>> b) the network was not up in the initial run, as step 2 failed >>>>> >>> during >>>>> >>> the initial run, but passed in the second run. >>>>> >>> >>>>> >>> Please let me know if my conclusion is correct. >>>>> >>> >>>>> >>> If it is, you can solve your problem in two ways. >>>>> >>> 1. Use the same string for doing the peer probe and for the >>>>> >>> brick >>>>> >>> address during volume create/add-brick. Ideally, we suggest >>>>> >>> you >>>>> >>> use >>>>> >>> properly resolvable FQDNs everywhere. If that is not
>>>>> >>> then >>>>> >>> use only IP addresses. Try to avoid short names. >>>>> >>> 2. During boot up, make sure to launch glusterd only after
>>>>> >>> network >>>>> >>> is up. This will allow the new peer identification mechanism >>>>> >>> to do >>>>> >>> its >>>>> >>> job correctly. >>>>> >>> >>>>> >>> >>>>> >>> If you have already followed these steps and yet still hit
>>>>> >>> problem, then please provide more information (setup, logs, >>>>> >>> etc.). >>>>> >>> It >>>>> >>> could be much different problem that you are facing. >>>>> >>> >>>>> >>> ~kaushal >>>>> >>> >>>>> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal >>>>> >>> <hypunit@gmail.com> >>>>> >>> wrote: >>>>> >>> > Is there any one can help on this ?? >>>>> >>> > >>>>> >>> > Thanks, >>>>> >>> > punit >>>>> >>> > >>>>> >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal >>>>> >>> > <hypunit@gmail.com> >>>>> >>> > wrote: >>>>> >>> >> >>>>> >>> >> Hi, >>>>> >>> >> >>>>> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >>>>> >>> >> >>>>> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >>>>> >>> >> <kmayilsa@redhat.com> >>>>> >>> >> wrote: >>>>> >>> >>> >>>>> >>> >>> [+Gluster-users@gluster.org] >>>>> >>> >>> >>>>> >>> >>> "Initialization of volume 'management' failed, review your >>>>> >>> >>> volfile >>>>> >>> >>> again", glusterd throws this error when the service is >>>>> >>> >>> started >>>>> >>> >>> automatically >>>>> >>> >>> after the reboot. But the service is successfully started >>>>> >>> >>> later >>>>> >>> >>> manually by >>>>> >>> >>> the user. >>>>> >>> >>> >>>>> >>> >>> can somebody from gluster-users please help on this? >>>>> >>> >>> >>>>> >>> >>> glusterfs version: 3.5.1 >>>>> >>> >>> >>>>> >>> >>> Thanks, >>>>> >>> >>> Kanagaraj >>>>> >>> >>> >>>>> >>> >>> ----- Original Message ----- >>>>> >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >>>>> >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >>>>> >>> >>> > Cc: users@ovirt.org >>>>> >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >>>>> >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] >>>>> >>> >>> > failed on >>>>> >>> >>> > server... >>>>> >>> >>> > >>>>> >>> >>> > Hi Kanagraj, >>>>> >>> >>> > >>>>> >>> >>> > Please check the attached log files....i didn't find any >>>>> >>> >>> > thing >>>>> >>> >>> > special.... >>>>> >>> >>> > >>>>> >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj >>>>> >>> >>> > <kmayilsa@redhat.com> >>>>> >>> >>> > wrote: >>>>> >>> >>> > >>>>> >>> >>> > > Do you see any errors in >>>>> >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or >>>>> >>> >>> > > vdsm.log >>>>> >>> >>> > > when >>>>> >>> >>> > > the >>>>> >>> >>> > > service is trying to start automatically after the >>>>> >>> >>> > > reboot? >>>>> >>> >>> > > >>>>> >>> >>> > > Thanks, >>>>> >>> >>> > > Kanagaraj >>>>> >>> >>> > > >>>>> >>> >>> > > >>>>> >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >>>>> >>> >>> > > >>>>> >>> >>> > > Hi Kanagaraj, >>>>> >>> >>> > > >>>>> >>> >>> > > Yes...once i will start the gluster service and
>>>>> >>> >>> > > vdsmd >>>>> >>> >>> > > ...the >>>>> >>> >>> > > host >>>>> >>> >>> > > can connect to cluster...but the question is why it's >>>>> >>> >>> > > not >>>>> >>> >>> > > started >>>>> >>> >>> > > even it >>>>> >>> >>> > > has chkconfig enabled... >>>>> >>> >>> > > >>>>> >>> >>> > > I have tested it in two host cluster >>>>> >>> >>> > > environment...(Centos 6.6 >>>>> >>> >>> > > and >>>>> >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed >>>>> >>> >>> > > to >>>>> >>> >>> > > reconnect >>>>> >>> >>> > > in >>>>> >>> >>> > > to >>>>> >>> >>> > > cluster after reboot.... >>>>> >>> >>> > > >>>>> >>> >>> > > In both the environment glusterd enabled for next >>>>> >>> >>> > > boot....but >>>>> >>> >>> > > it's >>>>> >>> >>> > > failed with the same error....seems it's bug in either >>>>> >>> >>> > > gluster or >>>>> >>> >>> > > Ovirt ?? >>>>> >>> >>> > > >>>>> >>> >>> > > Please help me to find the workaround here if can not >>>>> >>> >>> > > resolve >>>>> >>> >>> > > it...as >>>>> >>> >>> > > without this the Host machine can not connect after >>>>> >>> >>> > > reboot....that >>>>> >>> >>> > > means >>>>> >>> >>> > > engine will consider it as down and every time need to >>>>> >>> >>> > > manually >>>>> >>> >>> > > start >>>>> >>> >>> > > the >>>>> >>> >>> > > gluster service and vdsmd... ?? >>>>> >>> >>> > > >>>>> >>> >>> > > Thanks, >>>>> >>> >>> > > Punit >>>>> >>> >>> > > >>>>> >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj >>>>> >>> >>> > > <kmayilsa@redhat.com> >>>>> >>> >>> > > wrote: >>>>> >>> >>> > > >>>>> >>> >>> > >> From vdsm.log "error: Connection failed. Please >>>>> >>> >>> > >> check if >>>>> >>> >>> > >> gluster >>>>> >>> >>> > >> daemon >>>>> >>> >>> > >> is operational." >>>>> >>> >>> > >> >>>>> >>> >>> > >> Starting glusterd service should fix this issue. >>>>> >>> >>> > >> 'service >>>>> >>> >>> > >> glusterd >>>>> >>> >>> > >> start' >>>>> >>> >>> > >> But i am wondering why the glusterd was not started >>>>> >>> >>> > >> automatically >>>>> >>> >>> > >> after >>>>> >>> >>> > >> the reboot. 
>>>>> >>> >>> > >> >>>>> >>> >>> > >> Thanks, >>>>> >>> >>> > >> Kanagaraj >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >>>>> >>> >>> > >> >>>>> >>> >>> > >> Hi Kanagaraj, >>>>> >>> >>> > >> >>>>> >>> >>> > >> Please find the attached VDSM logs :- >>>>> >>> >>> > >> >>>>> >>> >>> > >> ---------------- >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >>>>> >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 >>>>> >>> >>> > >> aborting >>>>> >>> >>> > >> False >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>>> >>> >>> > >> from >>>>> >>> >>> > >> state >>>>> >>> >>> > >> init >>>>> >>> >>> > >> -> >>>>> >>> >>> > >> state preparing >>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>> >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run >>>>> >>> >>> > >> and >>>>> >>> >>> > >> protect: >>>>> >>> >>> > >> repoStats(options=None) >>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>> >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run >>>>> >>> >>> > >> and >>>>> >>> >>> > >> protect: >>>>> >>> >>> > >> repoStats, Return response: {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >>>>> >>> >>> > >> >>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>>> >>> >>> > >> from >>>>> >>> >>> > >> state >>>>> >>> >>> > >> preparing >>>>> >>> >>> > >> -> >>>>> >>> >>> > >> state finished >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>>>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 >>>>> >>> >>> > >> aborting >>>>> >>> >>> > >> False >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) >>>>> >>> >>> > >> client >>>>> >>> >>> > >> [10.10.10.2]::call >>>>> >>> >>> > >> getCapabilities with () {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > 
>> 21:41:41,553::utils::738::root::(execCmd) >>>>> >>> >>> > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) >>>>> >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-object',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-plugin',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-account',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-proxy',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-doc',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-container',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('glusterfs-geo-replication',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,600::caps::646::root::(get) >>>>> >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 >>>>> >>> >>> > >> required >>>>> >>> >>> > >> >= >>>>> >>> >>> > >> 0.10.2-31 >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) >>>>> >>> >>> > >> return >>>>> >>> >>> > >> getCapabilities >>>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>>> >>> >>> > >> 'info': >>>>> >>> >>> > >> {'HBAInventory': >>>>> >>> >>> > >> {'iSCSI': [{'InitiatorName': >>>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >>>>> >>> >>> > >> 'FC': >>>>> >>> >>> > >> []}, 'packages2': {'kernel': {'release': >>>>> >>> >>> > >> '431.el6.x86_64', >>>>> >>> >>> > >> 'buildtime': >>>>> >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >>>>> >>> >>> > >> {'release': >>>>> >>> >>> > >> '1.el6', >>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, >>>>> >>> >>> > >> 'glusterfs-fuse': >>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>>> >>> >>> > >> 'version': >>>>> >>> >>> > >> '3.5.1'}, >>>>> >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': >>>>> >>> >>> > >> 1402324637L, >>>>> >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': >>>>> >>> >>> > >> '1.gitdb83943.el6', >>>>> >>> >>> > >> 'buildtime': >>>>> >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': >>>>> >>> >>> > >> {'release': >>>>> >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, >>>>> >>> >>> > >> 'version': >>>>> >>> >>> > >> '0.12.1.2'}, >>>>> >>> >>> > >> 
'qemu-img': {'release': '2.415.el6_5.10', >>>>> >>> >>> > >> 'buildtime': >>>>> >>> >>> > >> 1402435700L, >>>>> >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': {'release': >>>>> >>> >>> > >> '29.el6_5.9', >>>>> >>> >>> > >> 'buildtime': >>>>> >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': >>>>> >>> >>> > >> {'release': >>>>> >>> >>> > >> '1.el6', >>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >>>>> >>> >>> > >> {'release': >>>>> >>> >>> > >> '2.el6', >>>>> >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >>>>> >>> >>> > >> 'glusterfs-server': >>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>>> >>> >>> > >> 'version': >>>>> >>> >>> > >> '3.5.1'}}, >>>>> >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, >>>>> >>> >>> > >> 'cpuModel': >>>>> >>> >>> > >> 'Intel(R) >>>>> >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': >>>>> >>> >>> > >> 'false', >>>>> >>> >>> > >> 'hooks': >>>>> >>> >>> > >> {}, >>>>> >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': >>>>> >>> >>> > >> {'mode': '1'}, >>>>> >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', >>>>> >>> >>> > >> '2.3'], >>>>> >>> >>> > >> 'networks': >>>>> >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': >>>>> >>> >>> > >> '43.252.176.16', >>>>> >>> >>> > >> 'bridged': >>>>> >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', >>>>> >>> >>> > >> 'ipv4addrs': >>>>> >>> >>> > >> [' >>>>> >>> >>> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], >>>>> >>> >>> > >> 'interface': >>>>> >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': >>>>> >>> >>> > >> '43.25.17.1'}, >>>>> >>> >>> > >> 'Internal': >>>>> >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'HOTPLUG': >>>>> >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'BOOTPROTO': >>>>> >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': >>>>> >>> >>> > >> 'Bridge', >>>>> >>> >>> > >> 'ONBOOT': >>>>> >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs': >>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>>> >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', >>>>> >>> >>> > >> 'stp': >>>>> >>> >>> > >> 'off', >>>>> >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', >>>>> >>> >>> > >> 'ports': >>>>> >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', >>>>> >>> >>> > >> 'addr': >>>>> >>> >>> > >> '10.10.10.6', >>>>> >>> >>> > >> 'bridged': False, 'ipv6addrs': >>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask': >>>>> >>> >>> > >> '255.255.255.0', >>>>> >>> >>> > >> 'ipv4addrs': [' >>>>> >>> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], >>>>> >>> >>> > >> 'interface': >>>>> >>> >>> > >> u'bond1', >>>>> >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': >>>>> >>> >>> > >> {'iface': >>>>> >>> >>> > >> 'VMNetwork', >>>>> >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'MTU': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>>> >>> >>> > >> 'none', >>>>> >>> >>> > >> 'STP': >>>>> >>> >>> > >> 'off', >>>>> >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 
>>>>> >>> >>> > >> 'no'}, >>>>> >>> >>> > >> 'bridged': >>>>> >>> >>> > >> True, >>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>> >>> >>> > >> 'gateway': >>>>> >>> >>> > >> '', >>>>> >>> >>> > >> 'bootproto4': >>>>> >>> >>> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, >>>>> >>> >>> > >> 'bridges': >>>>> >>> >>> > >> {'Internal': >>>>> >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'MTU': >>>>> >>> >>> > >> '9000', >>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>>> >>> >>> > >> 'none', >>>>> >>> >>> > >> 'STP': >>>>> >>> >>> > >> 'off', >>>>> >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': >>>>> >>> >>> > >> 'no'}, >>>>> >>> >>> > >> 'ipv6addrs': >>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', >>>>> >>> >>> > >> 'netmask': '', >>>>> >>> >>> > >> 'stp': >>>>> >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', >>>>> >>> >>> > >> 'gateway': >>>>> >>> >>> > >> '', >>>>> >>> >>> > >> 'opts': >>>>> >>> >>> > >> {'topology_change_detected': '0', >>>>> >>> >>> > >> 'multicast_last_member_count': >>>>> >>> >>> > >> '2', >>>>> >>> >>> > >> 'hash_elasticity': '4', >>>>> >>> >>> > >> 'multicast_query_response_interval': >>>>> >>> >>> > >> '999', >>>>> >>> >>> > >> 'multicast_snooping': '1', >>>>> >>> >>> > >> 'multicast_startup_query_interval': >>>>> >>> >>> > >> '3124', >>>>> >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': >>>>> >>> >>> > >> '25496', >>>>> >>> >>> > >> 'max_age': >>>>> >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', >>>>> >>> >>> > >> 'root_id': >>>>> >>> >>> > >> '8000.001018cddaac', 'priority': '32768', >>>>> >>> >>> > >> 'multicast_membership_interval': >>>>> >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >>>>> >>> >>> > >> 'multicast_querier': >>>>> >>> >>> > >> '0', >>>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>>> >>> >>> > >> '199', >>>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>>> >>> >>> > >> '8000.001018cddaac', >>>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>>> >>> >>> > >> 'gc_timer': >>>>> >>> >>> > >> '31', >>>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>>> >>> >>> > >> 'multicast_last_member_interval': >>>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>>> >>> >>> > >> 'ports': >>>>> >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': >>>>> >>> >>> > >> {'DEFROUTE': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', >>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': >>>>> >>> >>> > >> 'VMNetwork', >>>>> >>> >>> > >> 'TYPE': >>>>> >>> >>> > >> 'Bridge', >>>>> >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': >>>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>>> >>> >>> > >> 'ipv6gateway': >>>>> >>> >>> > >> '::', >>>>> >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': >>>>> >>> >>> > >> '0', >>>>> >>> >>> > >> 'multicast_last_member_count': '2', >>>>> >>> >>> > >> 'hash_elasticity': >>>>> >>> >>> > >> '4', >>>>> >>> >>> > >> 
>>>>> >>> >>> > >> [... remainder of the quoted vdsm getCapabilities/getHardwareInfo output trimmed; the same capability dump is quoted again further down in this thread ...]
>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036]
>>>>> >>> >>> > >> Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured
>>>>> >>> >>> > >> Traceback (most recent call last):
>>>>> >>> >>> > >>   File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
>>>>> >>> >>> > >>     res = f(*args, **kwargs)
>>>>> >>> >>> > >>   File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
>>>>> >>> >>> > >>     rv = func(*args, **kwargs)
>>>>> >>> >>> > >>   File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
>>>>> >>> >>> > >>     return {'hosts': self.svdsmProxy.glusterPeerStatus()}
>>>>> >>> >>> > >>   File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
>>>>> >>> >>> > >>     return callMethod()
>>>>> >>> >>> > >>   File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
>>>>> >>> >>> > >>     **kwargs)
>>>>> >>> >>> > >>   File "<string>", line 2, in glusterPeerStatus
>>>>> >>> >>> > >>   File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
>>>>> >>> >>> > >>     raise convert_to_error(kind, result)
>>>>> >>> >>> > >> GlusterCmdExecFailedException: Command execution failed
>>>>> >>> >>> > >> error: Connection failed. Please check if gluster daemon is operational.
>>>>> >>> >>> > >> return code: 1
>>>>> >>> >>> > >> [... repoStats task bookkeeping lines (21:41:50) trimmed ...]
>>>>> >>> >>> > >> -------------------------------
>>>>> >>> >>> > >> [root@compute4 ~]# service glusterd status
>>>>> >>> >>> > >> glusterd is stopped
>>>>> >>> >>> > >> [root@compute4 ~]# chkconfig --list | grep glusterd
>>>>> >>> >>> > >> glusterd       0:off   1:off   2:on    3:on    4:on    5:on    6:off
>>>>> >>> >>> > >> [root@compute4 ~]#
>>>>> >>> >>> > >>
>>>>> >>> >>> > >> Thanks,
>>>>> >>> >>> > >> Punit
>>>>> >>> >>> > >>
>>>>> >>> >>> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:
>>>>> >>> >>> > >>
>>>>> >>> >>> > >>> Can you send the corresponding error in vdsm.log from the host?
>>>>> >>> >>> > >>>
>>>>> >>> >>> > >>> Also check if glusterd service is running.
>>>>> >>> >>> > >>>
>>>>> >>> >>> > >>> Thanks,
>>>>> >>> >>> > >>> Kanagaraj
>>>>> >>> >>> > >>>
>>>>> >>> >>> > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote:
>>>>> >>> >>> > >>>
>>>>> >>> >>> > >>> [... Punit's original 11/24 report with the engine log, quoted in full at the top of this thread, trimmed ...]

Dear Kaushal,
I tried various methods...but still the same error...it seems it's a gluster bug...is there anybody who can suggest a workaround here??
Thanks, Punit
On Sun, Dec 7, 2014 at 8:40 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
Still the same error...even after trying your suggested workaround :-
-------------------
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
-------------------
Thanks, Punit
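(For anyone retracing this, one quick sanity check is to confirm that the edited unit is really what systemd has loaded after the change. These are illustrative commands, not something run in the thread, and they assume the unit is named glusterd.service:)
-------------------
systemctl daemon-reload
systemctl show glusterd.service -p After -p Wants -p Before
-------------------
If 'Before=network-online.target' still appears in the output, the edit was not picked up.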
On Sat, Dec 6, 2014 at 11:44 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
I already have entries for all the hosts in /etc/hosts for easy resolution....I will try your method in glusterd.service, check it, and let you know....whether the problem is solved or not.....
On Fri, Dec 5, 2014 at 9:50 PM, Kaushal M <kshlmster@gmail.com> wrote:
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
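(To make that concrete, here is a minimal sketch of how the [Unit] section could look after the change. It goes slightly beyond the literal suggestion: Wants= only pulls network-online.target in, so network-online.target is also added to After=, which is what actually enforces the ordering. The rest of the unit stays as shipped.)
-------------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service network-online.target
Wants=network-online.target
-------------------
The same two lines could equally be placed in a drop-in such as /etc/systemd/system/glusterd.service.d/override.conf (the file name is arbitrary) followed by 'systemctl daemon-reload', instead of editing the shipped unit file.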
If even that fails, you could try adding an entry into /etc/hosts with the hostname of the system. This should prevent any more failures.
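(Purely for illustration, with an invented address and short alias that are not taken from the thread, such an entry would look like:)
-------------------
# /etc/hosts -- static entry so the host's own name resolves even before DNS is reachable
10.10.0.5   cpu05.zne01.hkg1.ovt.36stack.com   cpu05
-------------------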
I still don't believe it's a problem with Gluster. Gluster uses APIs provided by the system to perform name resolution. These definitely work correctly, because you can start GlusterD later. Since the resolution failure only happens during boot, it points to system or network setup issues during boot. To me it seems like the network isn't completely set up at that point in time.
~kaushal
On Fri, Dec 5, 2014 at 12:47 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
It seems it's a bug in glusterfs 3.6....even when I arrange systemd to start the network service before glusterd...it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
Thanks, Punit
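(A way to see how that actually plays out at boot, i.e. whether glusterd really starts only after the interfaces are up, is to compare the startup ordering and timestamps in the journal. Illustrative commands, not from the thread, assuming the legacy network initscript is what brings the interfaces up; systemd exposes it as network.service:)
-------------------
systemd-analyze critical-chain glusterd.service
journalctl -b -u glusterd.service -u network.service -o short-monotonic
-------------------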
On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user on the mailing list a while back with a similar issue of GlusterD failing to start. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration and the problem vanished when he switched to dhcp. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing you could look at.
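(If NetworkManager is, or ends up being, the thing bringing the interfaces up, the piece that usually matters for "network-online" semantics on CentOS 7 is its wait-online unit; enabling it is a small, reversible thing to try. Illustrative commands, not something suggested in the thread itself:)
-------------------
systemctl enable NetworkManager-wait-online.service
systemctl is-enabled NetworkManager-wait-online.service
-------------------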
I'll try to find the mail thread to see if it was the same problem as yours.
~kaushal
On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com> wrote:
I don't know much about how the network target is brought up in CentOS7, but I'll try as much as I can.
It seems to me that, after the network has been brought up and by the time GlusterD is started, a. the machine hasn't yet received its hostname, or b. it hasn't yet registered with the name server.
This is causing name resolution failures.
I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a.
So it seems to be b. But this kind of registration happens only in DDNS systems, which doesn't seem to be the case for you.
Both of these reasons might be wrong (most likely they are wrong). You'd do well to ask for help from someone with more experience in systemd + networking.
~kaushal
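(One way to tell (a) and (b) apart without guessing would be to log what the host knows about itself immediately before glusterd starts. A sketch, assuming the FQDN from the log quoted below and using absolute paths as systemd requires; the '-' prefix makes a failing command non-fatal:)
-------------------
# temporary lines for the [Service] section of glusterd.service; remove after debugging
ExecStartPre=-/usr/bin/hostname -f
ExecStartPre=-/usr/bin/getent ahosts cpu05.zne01.hkg1.ovt.36stack.com
-------------------
Their output lands in the journal ('journalctl -b -u glusterd.service'), so after a reboot it is immediately visible whether the hostname and its resolution were available at that point.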
On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote: > Hi Kaushal, > > This is the host...which i rebooted...would you mind to let me know how > i > can make the glusterd sevice come up after network...i am using > centos7...if > network is the issue... > > On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote: >> >> This peer cannot be identified. >> >> " [2014-12-03 02:29:25.998153] D >> [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] >> 0-management: >> Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com" >> >> I don't know why this address is not being resolved during boot time. >> If >> this is a valid peer, the the only reason I can think of this
>> network is not up. >> >> If you had previously detached the peer forcefully, the that could >> have >> left stale entries in some volumes. In this case as well, GlusterD >> will fail >> to identify the peer. >> >> Do either of these reasons seem a possibility to you? >> >> On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>> >>> Hi Kaushal, >>> >>> Please find the logs here :- http://ur1.ca/iyoe5 and >>> http://ur1.ca/iyoed >>> >>> On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> >>> wrote: >>>> >>>> Hey Punit, >>>> In the logs you've provided, GlusterD appears to be running >>>> correctly. >>>> Could you provide the logs for the time period when GlusterD >>>> attempts to >>>> start but fails. >>>> >>>> ~kaushal >>>> >>>> On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>>>> >>>>> Hi Kaushal, >>>>> >>>>> Please find the logs here :- http://ur1.ca/iyhs5 and >>>>> http://ur1.ca/iyhue >>>>> >>>>> Thanks, >>>>> punit >>>>> >>>>> >>>>> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M < kshlmster@gmail.com> >>>>> wrote: >>>>>> >>>>>> Hey Punit, >>>>>> Could you start Glusterd in debug mode and provide the logs here? >>>>>> To start it in debug mode, append '-LDEBUG' to the ExecStart
>>>>>> in >>>>>> the service file. >>>>>> >>>>>> ~kaushal >>>>>> >>>>>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal < hypunit@gmail.com> >>>>>> wrote: >>>>>> > Hi, >>>>>> > >>>>>> > Can Any body help me on this ?? >>>>>> > >>>>>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal >>>>>> > <hypunit@gmail.com> >>>>>> > wrote: >>>>>> >> >>>>>> >> Hi Kaushal, >>>>>> >> >>>>>> >> Thanks for the detailed reply....let me explain my setup first >>>>>> >> :- >>>>>> >> >>>>>> >> 1. Ovirt Engine >>>>>> >> 2. 4* host as well as storage machine (Host and gluster >>>>>> >> combined) >>>>>> >> 3. Every host has 24 bricks... >>>>>> >> >>>>>> >> Now whenever the host machine reboot...it can come up but can >>>>>> >> not >>>>>> >> join the >>>>>> >> cluster again and through the following error "Gluster command >>>>>> >> [<UNKNOWN>] >>>>>> >> failed on server.." >>>>>> >> >>>>>> >> Please check my comment in line :- >>>>>> >> >>>>>> >> 1. Use the same string for doing the peer probe and for the >>>>>> >> brick >>>>>> >> address >>>>>> >> during volume create/add-brick. Ideally, we suggest you use >>>>>> >> properly >>>>>> >> resolvable FQDNs everywhere. If that is not possible, then use >>>>>> >> only >>>>>> >> IP >>>>>> >> addresses. Try to avoid short names. >>>>>> >> --------------- >>>>>> >> [root@cpu05 ~]# gluster peer status >>>>>> >> Number of Peers: 3 >>>>>> >> >>>>>> >> Hostname: cpu03.stack.com >>>>>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >>>>>> >> State: Peer in Cluster (Connected) >>>>>> >> >>>>>> >> Hostname: cpu04.stack.com >>>>>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >>>>>> >> State: Peer in Cluster (Connected) >>>>>> >> Other names: >>>>>> >> 10.10.0.8 >>>>>> >> >>>>>> >> Hostname: cpu02.stack.com >>>>>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >>>>>> >> State: Peer in Cluster (Connected) >>>>>> >> [root@cpu05 ~]# >>>>>> >> ---------------- >>>>>> >> 2. During boot up, make sure to launch glusterd only after
>>>>>> >> network is >>>>>> >> up. This will allow the new peer identification mechanism to do >>>>>> >> its >>>>>> >> job correctly. >>>>>> >> >> I think the service itself doing the same job.... >>>>>> >> >>>>>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >>>>>> >> [Unit] >>>>>> >> Description=GlusterFS, a clustered file-system server >>>>>> >> After=network.target rpcbind.service >>>>>> >> Before=network-online.target >>>>>> >> >>>>>> >> [Service] >>>>>> >> Type=forking >>>>>> >> PIDFile=/var/run/glusterd.pid >>>>>> >> LimitNOFILE=65536 >>>>>> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >>>>>> >> KillMode=process >>>>>> >> >>>>>> >> [Install] >>>>>> >> WantedBy=multi-user.target >>>>>> >> [root@cpu05 ~]# >>>>>> >> -------------------- >>>>>> >> >>>>>> >> gluster logs :- >>>>>> >> >>>>>> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >>>>>> >> [glusterfsd.c:2018:main] >>>>>> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd >>>>>> >> version >>>>>> >> 3.6.1 >>>>>> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >>>>>> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] >>>>>> >> 0-management: >>>>>> >> Maximum allowed open file descriptors set to 65536 >>>>>> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] >>>>>> >> 0-management: >>>>>> >> Using >>>>>> >> /var/lib/glusterd as working directory >>>>>> >> [2014-11-24 09:22:22.155216] W >>>>>> >> [rdma.c:4195:__gf_rdma_ctx_create] >>>>>> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >>>>>> >> such device) >>>>>> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >>>>>> >> 0-rdma.management: >>>>>> >> Failed to initialize IB Device >>>>>> >> [2014-11-24 09:22:22.155285] E >>>>>> >> [rpc-transport.c:333:rpc_transport_load] >>>>>> >> 0-rpc-transport: 'rdma' initialization failed >>>>>> >> [2014-11-24 09:22:22.155354] W >>>>>> >> [rpcsvc.c:1524:rpcsvc_transport_create] >>>>>> >> 0-rpc-service: cannot create listener, initing the
>>>>>> >> failed >>>>>> >> [2014-11-24 09:22:22.156290] I >>>>>> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >>>>>> >> geo-replication >>>>>> >> module not installed in the system >>>>>> >> [2014-11-24 09:22:22.161318] I >>>>>> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >>>>>> >> retrieved >>>>>> >> op-version: 30600 >>>>>> >> [2014-11-24 09:22:22.821800] I >>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>> >> 0-management: >>>>>> >> connect returned 0 >>>>>> >> [2014-11-24 09:22:22.825810] I >>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>> >> 0-management: >>>>>> >> connect returned 0 >>>>>> >> [2014-11-24 09:22:22.828705] I >>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>> >> 0-management: >>>>>> >> connect returned 0 >>>>>> >> [2014-11-24 09:22:22.828771] I >>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>> >> 0-management: setting frame-timeout to 600 >>>>>> >> [2014-11-24 09:22:22.832670] I >>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>> >> 0-management: setting frame-timeout to 600 >>>>>> >> [2014-11-24 09:22:22.835919] I >>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>> >> 0-management: setting frame-timeout to 600 >>>>>> >> [2014-11-24 09:22:22.840209] E >>>>>> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >>>>>> >> resolve >>>>>> >> brick failed in restore >>>>>> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >>>>>> >> 0-management: >>>>>> >> Initialization of volume 'management' failed, review your >>>>>> >> volfile >>>>>> >> again >>>>>> >> [2014-11-24 09:22:22.840245] E >>>>>> >> [graph.c:322:glusterfs_graph_init] >>>>>> >> 0-management: initializing translator failed >>>>>> >> [2014-11-24 09:22:22.840264] E >>>>>> >> [graph.c:525:glusterfs_graph_activate] >>>>>> >> 0-graph: init failed >>>>>> >> [2014-11-24 09:22:22.840754] W >>>>>> >> [glusterfsd.c:1194:cleanup_and_exit] >>>>>> >> (--> >>>>>> >> 0-: received signum (0), shutting down >>>>>> >> >>>>>> >> Thanks, >>>>>> >> Punit >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M >>>>>> >> <kshlmster@gmail.com> >>>>>> >> wrote: >>>>>> >>> >>>>>> >>> Based on the logs I can guess that glusterd is being started >>>>>> >>> before >>>>>> >>> the network has come up and that the addresses given to bricks >>>>>> >>> do >>>>>> >>> not >>>>>> >>> directly match the addresses used in during peer probe. >>>>>> >>> >>>>>> >>> The gluster_after_reboot log has the line "[2014-11-25 >>>>>> >>> 06:46:09.972113] E >>>>>> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >>>>>> >>> 0-glusterd: resolve brick failed in restore". >>>>>> >>> >>>>>> >>> Brick resolution fails when glusterd cannot match the address >>>>>> >>> for >>>>>> >>> the >>>>>> >>> brick, with one of the peers. Brick resolution happens in two >>>>>> >>> phases, >>>>>> >>> 1. We first try to identify the peer by performing string >>>>>> >>> comparisions >>>>>> >>> with the brick address and the peer addresses (The peer names >>>>>> >>> will >>>>>> >>> be >>>>>> >>> the names/addresses that were given when the peer was
>>>>>> >>> 2. If we don't find a match from step 1, we will then resolve >>>>>> >>> all >>>>>> >>> the >>>>>> >>> brick address and the peer addresses into addrinfo structs, >>>>>> >>> and >>>>>> >>> then >>>>>> >>> compare these structs to find a match. This process should >>>>>> >>> generally >>>>>> >>> find a match if available. This will fail only if the network >>>>>> >>> is >>>>>> >>> not >>>>>> >>> up yet as we cannot resolve addresses. >>>>>> >>> >>>>>> >>> The above steps are applicable only to glusterfs versions >>>>>> >>> >=3.6. >>>>>> >>> They >>>>>> >>> were introduced to reduce problems with peer identification, >>>>>> >>> like >>>>>> >>> the >>>>>> >>> one you encountered >>>>>> >>> >>>>>> >>> Since both of the steps failed to find a match in one run, but >>>>>> >>> succeeded later, we can come to the conclusion that, >>>>>> >>> a) the bricks don't have the exact same string used in
>>>>>> >>> probe >>>>>> >>> for >>>>>> >>> their addresses as step 1 failed, and >>>>>> >>> b) the network was not up in the initial run, as step 2 failed >>>>>> >>> during >>>>>> >>> the initial run, but passed in the second run. >>>>>> >>> >>>>>> >>> Please let me know if my conclusion is correct. >>>>>> >>> >>>>>> >>> If it is, you can solve your problem in two ways. >>>>>> >>> 1. Use the same string for doing the peer probe and for
>>>>>> >>> brick >>>>>> >>> address during volume create/add-brick. Ideally, we suggest >>>>>> >>> you >>>>>> >>> use >>>>>> >>> properly resolvable FQDNs everywhere. If that is not
>>>>>> >>> then >>>>>> >>> use only IP addresses. Try to avoid short names. >>>>>> >>> 2. During boot up, make sure to launch glusterd only after the >>>>>> >>> network >>>>>> >>> is up. This will allow the new peer identification mechanism >>>>>> >>> to do >>>>>> >>> its >>>>>> >>> job correctly. >>>>>> >>> >>>>>> >>> >>>>>> >>> If you have already followed these steps and yet still hit the >>>>>> >>> problem, then please provide more information (setup, logs, >>>>>> >>> etc.). >>>>>> >>> It >>>>>> >>> could be much different problem that you are facing. >>>>>> >>> >>>>>> >>> ~kaushal >>>>>> >>> >>>>>> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal >>>>>> >>> <hypunit@gmail.com> >>>>>> >>> wrote: >>>>>> >>> > Is there any one can help on this ?? >>>>>> >>> > >>>>>> >>> > Thanks, >>>>>> >>> > punit >>>>>> >>> > >>>>>> >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal >>>>>> >>> > <hypunit@gmail.com> >>>>>> >>> > wrote: >>>>>> >>> >> >>>>>> >>> >> Hi, >>>>>> >>> >> >>>>>> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >>>>>> >>> >> >>>>>> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >>>>>> >>> >> <kmayilsa@redhat.com> >>>>>> >>> >> wrote: >>>>>> >>> >>> >>>>>> >>> >>> [+Gluster-users@gluster.org] >>>>>> >>> >>> >>>>>> >>> >>> "Initialization of volume 'management' failed, review your >>>>>> >>> >>> volfile >>>>>> >>> >>> again", glusterd throws this error when the service is >>>>>> >>> >>> started >>>>>> >>> >>> automatically >>>>>> >>> >>> after the reboot. But the service is successfully started >>>>>> >>> >>> later >>>>>> >>> >>> manually by >>>>>> >>> >>> the user. >>>>>> >>> >>> >>>>>> >>> >>> can somebody from gluster-users please help on this? >>>>>> >>> >>> >>>>>> >>> >>> glusterfs version: 3.5.1 >>>>>> >>> >>> >>>>>> >>> >>> Thanks, >>>>>> >>> >>> Kanagaraj >>>>>> >>> >>> >>>>>> >>> >>> ----- Original Message ----- >>>>>> >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >>>>>> >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >>>>>> >>> >>> > Cc: users@ovirt.org >>>>>> >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >>>>>> >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] >>>>>> >>> >>> > failed on >>>>>> >>> >>> > server... >>>>>> >>> >>> > >>>>>> >>> >>> > Hi Kanagraj, >>>>>> >>> >>> > >>>>>> >>> >>> > Please check the attached log files....i didn't find any >>>>>> >>> >>> > thing >>>>>> >>> >>> > special.... >>>>>> >>> >>> > >>>>>> >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj >>>>>> >>> >>> > <kmayilsa@redhat.com> >>>>>> >>> >>> > wrote: >>>>>> >>> >>> > >>>>>> >>> >>> > > Do you see any errors in >>>>>> >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or >>>>>> >>> >>> > > vdsm.log >>>>>> >>> >>> > > when >>>>>> >>> >>> > > the >>>>>> >>> >>> > > service is trying to start automatically after the >>>>>> >>> >>> > > reboot? >>>>>> >>> >>> > > >>>>>> >>> >>> > > Thanks, >>>>>> >>> >>> > > Kanagaraj >>>>>> >>> >>> > > >>>>>> >>> >>> > > >>>>>> >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >>>>>> >>> >>> > > >>>>>> >>> >>> > > Hi Kanagaraj, >>>>>> >>> >>> > > >>>>>> >>> >>> > > Yes...once i will start the gluster service and
>>>>>> >>> >>> > > vdsmd >>>>>> >>> >>> > > ...the >>>>>> >>> >>> > > host >>>>>> >>> >>> > > can connect to cluster...but the question is why it's >>>>>> >>> >>> > > not >>>>>> >>> >>> > > started >>>>>> >>> >>> > > even it >>>>>> >>> >>> > > has chkconfig enabled... >>>>>> >>> >>> > > >>>>>> >>> >>> > > I have tested it in two host cluster >>>>>> >>> >>> > > environment...(Centos 6.6 >>>>>> >>> >>> > > and >>>>>> >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed >>>>>> >>> >>> > > to >>>>>> >>> >>> > > reconnect >>>>>> >>> >>> > > in >>>>>> >>> >>> > > to >>>>>> >>> >>> > > cluster after reboot.... >>>>>> >>> >>> > > >>>>>> >>> >>> > > In both the environment glusterd enabled for next >>>>>> >>> >>> > > boot....but >>>>>> >>> >>> > > it's >>>>>> >>> >>> > > failed with the same error....seems it's bug in either >>>>>> >>> >>> > > gluster or >>>>>> >>> >>> > > Ovirt ?? >>>>>> >>> >>> > > >>>>>> >>> >>> > > Please help me to find the workaround here if can not >>>>>> >>> >>> > > resolve >>>>>> >>> >>> > > it...as >>>>>> >>> >>> > > without this the Host machine can not connect after >>>>>> >>> >>> > > reboot....that >>>>>> >>> >>> > > means >>>>>> >>> >>> > > engine will consider it as down and every time need to >>>>>> >>> >>> > > manually >>>>>> >>> >>> > > start >>>>>> >>> >>> > > the >>>>>> >>> >>> > > gluster service and vdsmd... ?? >>>>>> >>> >>> > > >>>>>> >>> >>> > > Thanks, >>>>>> >>> >>> > > Punit >>>>>> >>> >>> > > >>>>>> >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj >>>>>> >>> >>> > > <kmayilsa@redhat.com> >>>>>> >>> >>> > > wrote: >>>>>> >>> >>> > > >>>>>> >>> >>> > >> From vdsm.log "error: Connection failed. Please >>>>>> >>> >>> > >> check if >>>>>> >>> >>> > >> gluster >>>>>> >>> >>> > >> daemon >>>>>> >>> >>> > >> is operational." >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Starting glusterd service should fix this issue. >>>>>> >>> >>> > >> 'service >>>>>> >>> >>> > >> glusterd >>>>>> >>> >>> > >> start' >>>>>> >>> >>> > >> But i am wondering why the glusterd was not started >>>>>> >>> >>> > >> automatically >>>>>> >>> >>> > >> after >>>>>> >>> >>> > >> the reboot. 
>>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Thanks, >>>>>> >>> >>> > >> Kanagaraj >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Hi Kanagaraj, >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Please find the attached VDSM logs :- >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> ---------------- >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >>>>>> >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 >>>>>> >>> >>> > >> aborting >>>>>> >>> >>> > >> False >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>>>> >>> >>> > >> from >>>>>> >>> >>> > >> state >>>>>> >>> >>> > >> init >>>>>> >>> >>> > >> -> >>>>>> >>> >>> > >> state preparing >>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>>> >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run >>>>>> >>> >>> > >> and >>>>>> >>> >>> > >> protect: >>>>>> >>> >>> > >> repoStats(options=None) >>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>>> >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run >>>>>> >>> >>> > >> and >>>>>> >>> >>> > >> protect: >>>>>> >>> >>> > >> repoStats, Return response: {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>>>> >>> >>> > >> from >>>>>> >>> >>> > >> state >>>>>> >>> >>> > >> preparing >>>>>> >>> >>> > >> -> >>>>>> >>> >>> > >> state finished >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>>>>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 >>>>>> >>> >>> > >> aborting >>>>>> >>> >>> > >> False >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) >>>>>> >>> >>> > >> client >>>>>> >>> >>> > >> [10.10.10.2]::call >>>>>> >>> >>> > >> 
getCapabilities with () {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,553::utils::738::root::(execCmd) >>>>>> >>> >>> > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) >>>>>> >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-object',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-plugin',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-account',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-proxy',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-doc',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-container',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('glusterfs-geo-replication',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,600::caps::646::root::(get) >>>>>> >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 >>>>>> >>> >>> > >> required >>>>>> >>> >>> > >> >= >>>>>> >>> >>> > >> 0.10.2-31 >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) >>>>>> >>> >>> > >> return >>>>>> >>> >>> > >> getCapabilities >>>>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>>>> >>> >>> > >> 'info': >>>>>> >>> >>> > >> {'HBAInventory': >>>>>> >>> >>> > >> {'iSCSI': [{'InitiatorName': >>>>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >>>>>> >>> >>> > >> 'FC': >>>>>> >>> >>> > >> []}, 'packages2': {'kernel': {'release': >>>>>> >>> >>> > >> '431.el6.x86_64', >>>>>> >>> >>> > >> 'buildtime': >>>>>> >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >>>>>> >>> >>> > >> {'release': >>>>>> >>> >>> > >> '1.el6', >>>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, >>>>>> >>> >>> > >> 'glusterfs-fuse': >>>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>>>> >>> >>> > >> 'version': >>>>>> >>> >>> > >> '3.5.1'}, >>>>>> >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': >>>>>> >>> >>> > >> 1402324637L, >>>>>> >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': >>>>>> >>> >>> > >> '1.gitdb83943.el6', >>>>>> >>> >>> > >> 'buildtime': >>>>>> >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': >>>>>> >>> 
>>> > >> {'release': >>>>>> >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, >>>>>> >>> >>> > >> 'version': >>>>>> >>> >>> > >> '0.12.1.2'}, >>>>>> >>> >>> > >> 'qemu-img': {'release': '2.415.el6_5.10', >>>>>> >>> >>> > >> 'buildtime': >>>>>> >>> >>> > >> 1402435700L, >>>>>> >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': {'release': >>>>>> >>> >>> > >> '29.el6_5.9', >>>>>> >>> >>> > >> 'buildtime': >>>>>> >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': >>>>>> >>> >>> > >> {'release': >>>>>> >>> >>> > >> '1.el6', >>>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >>>>>> >>> >>> > >> {'release': >>>>>> >>> >>> > >> '2.el6', >>>>>> >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >>>>>> >>> >>> > >> 'glusterfs-server': >>>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>>>> >>> >>> > >> 'version': >>>>>> >>> >>> > >> '3.5.1'}}, >>>>>> >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, >>>>>> >>> >>> > >> 'cpuModel': >>>>>> >>> >>> > >> 'Intel(R) >>>>>> >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': >>>>>> >>> >>> > >> 'false', >>>>>> >>> >>> > >> 'hooks': >>>>>> >>> >>> > >> {}, >>>>>> >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': >>>>>> >>> >>> > >> {'mode': '1'}, >>>>>> >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', >>>>>> >>> >>> > >> '2.3'], >>>>>> >>> >>> > >> 'networks': >>>>>> >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': >>>>>> >>> >>> > >> '43.252.176.16', >>>>>> >>> >>> > >> 'bridged': >>>>>> >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', >>>>>> >>> >>> > >> 'ipv4addrs': >>>>>> >>> >>> > >> [' >>>>>> >>> >>> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], >>>>>> >>> >>> > >> 'interface': >>>>>> >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': >>>>>> >>> >>> > >> '43.25.17.1'}, >>>>>> >>> >>> > >> 'Internal': >>>>>> >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'HOTPLUG': >>>>>> >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'BOOTPROTO': >>>>>> >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': >>>>>> >>> >>> > >> 'Bridge', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>>>> >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', >>>>>> >>> >>> > >> 'stp': >>>>>> >>> >>> > >> 'off', >>>>>> >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', >>>>>> >>> >>> > >> 'ports': >>>>>> >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', >>>>>> >>> >>> > >> 'addr': >>>>>> >>> >>> > >> '10.10.10.6', >>>>>> >>> >>> > >> 'bridged': False, 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask': >>>>>> >>> >>> > >> '255.255.255.0', >>>>>> >>> >>> > >> 'ipv4addrs': [' >>>>>> >>> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], >>>>>> >>> >>> > >> 'interface': >>>>>> >>> >>> > >> u'bond1', >>>>>> >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': >>>>>> >>> >>> > >> {'iface': >>>>>> >>> >>> > >> 'VMNetwork', >>>>>> >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'MTU': 
>>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>>>> >>> >>> > >> 'none', >>>>>> >>> >>> > >> 'STP': >>>>>> >>> >>> > >> 'off', >>>>>> >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, >>>>>> >>> >>> > >> 'bridged': >>>>>> >>> >>> > >> True, >>>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>>> >>> >>> > >> 'gateway': >>>>>> >>> >>> > >> '', >>>>>> >>> >>> > >> 'bootproto4': >>>>>> >>> >>> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, >>>>>> >>> >>> > >> 'bridges': >>>>>> >>> >>> > >> {'Internal': >>>>>> >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'MTU': >>>>>> >>> >>> > >> '9000', >>>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>>>> >>> >>> > >> 'none', >>>>>> >>> >>> > >> 'STP': >>>>>> >>> >>> > >> 'off', >>>>>> >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, >>>>>> >>> >>> > >> 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', >>>>>> >>> >>> > >> 'netmask': '', >>>>>> >>> >>> > >> 'stp': >>>>>> >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', >>>>>> >>> >>> > >> 'gateway': >>>>>> >>> >>> > >> '', >>>>>> >>> >>> > >> 'opts': >>>>>> >>> >>> > >> {'topology_change_detected': '0', >>>>>> >>> >>> > >> 'multicast_last_member_count': >>>>>> >>> >>> > >> '2', >>>>>> >>> >>> > >> 'hash_elasticity': '4', >>>>>> >>> >>> > >> 'multicast_query_response_interval': >>>>>> >>> >>> > >> '999', >>>>>> >>> >>> > >> 'multicast_snooping': '1', >>>>>> >>> >>> > >> 'multicast_startup_query_interval': >>>>>> >>> >>> > >> '3124', >>>>>> >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': >>>>>> >>> >>> > >> '25496', >>>>>> >>> >>> > >> 'max_age': >>>>>> >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', >>>>>> >>> >>> > >> 'root_id': >>>>>> >>> >>> > >> '8000.001018cddaac', 'priority': '32768', >>>>>> >>> >>> > >> 'multicast_membership_interval': >>>>>> >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >>>>>> >>> >>> > >> 'multicast_querier': >>>>>> >>> >>> > >> '0', >>>>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>>>> >>> >>> > >> '199', >>>>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>>>> >>> >>> > >> '8000.001018cddaac', >>>>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>>>> >>> >>> > >> 'gc_timer': >>>>>> >>> >>> > >> '31', >>>>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>>>> >>> >>> > >> 'multicast_last_member_interval': >>>>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>>>> >>> >>> > >> 'ports': >>>>>> >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': >>>>>> >>> >>> > >> {'DEFROUTE': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', >>>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': >>>>>> >>> >>> > >> 'VMNetwork', >>>>>> >>> >>> > >> 'TYPE': >>>>>> >>> >>> > >> 'Bridge', >>>>>> >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': 
[], >>>>>> >>> >>> > >> 'ipv6gateway': >>>>>> >>> >>> > >> '::', >>>>>> >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': >>>>>> >>> >>> > >> '0', >>>>>> >>> >>> > >> 'multicast_last_member_count': '2', >>>>>> >>> >>> > >> 'hash_elasticity': >>>>>> >>> >>> > >> '4', >>>>>> >>> >>> > >> 'multicast_query_response_interval': '999', >>>>>> >>> >>> > >> 'multicast_snooping': >>>>>> >>> >>> > >> '1', >>>>>> >>> >>> > >> 'multicast_startup_query_interval': '3124', >>>>>> >>> >>> > >> 'hello_timer': >>>>>> >>> >>> > >> '131', >>>>>> >>> >>> > >> 'multicast_querier_interval': '25496', 'max_age': >>>>>> >>> >>> > >> '1999', >>>>>> >>> >>> > >> 'hash_max': >>>>>> >>> >>> > >> '512', 'stp_state': '0', 'root_id': >>>>>> >>> >>> > >> '8000.60eb6920b46c', >>>>>> >>> >>> > >> 'priority': >>>>>> >>> >>> > >> '32768', 'multicast_membership_interval': '25996', >>>>>> >>> >>> > >> 'root_path_cost': >>>>>> >>> >>> > >> '0', >>>>>> >>> >>> > >> 'root_port': '0', 'multicast_querier': '0', >>>>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>>>> >>> >>> > >> '199', >>>>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>>>> >>> >>> > >> '8000.60eb6920b46c', >>>>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>>>> >>> >>> > >> 'gc_timer': >>>>>> >>> >>> > >> '31', >>>>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>>>> >>> >>> > >> 'multicast_last_member_interval': >>>>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>>>> >>> >>> > >> 'ports': >>>>>> >>> >>> > >> ['bond0.36']}}, 'uuid': >>>>>> >>> >>> > >> '44454C4C-4C00-1057-8053-B7C04F504E31', >>>>>> >>> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': >>>>>> >>> >>> > >> {'permhwaddr': >>>>>> >>> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': >>>>>> >>> >>> > >> 'yes', >>>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', >>>>>> >>> >>> > >> 'MASTER': >>>>>> >>> >>> > >> 'bond1', >>>>>> >>> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '9000', >>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>>>> >>> >>> > >> 'speed': >>>>>> >>> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', >>>>>> >>> >>> > >> 'addr': '', >>>>>> >>> >>> > >> 'cfg': >>>>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>>>> >>> >>> > >> '9000', >>>>>> >>> >>> > >> 'HWADDR': >>>>>> >>> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': >>>>>> >>> >>> > >> 'eth2', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, >>>>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', >>>>>> >>> >>> > >> 'ipv4addrs': [], >>>>>> >>> >>> > >> 'hwaddr': >>>>>> >>> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': >>>>>> >>> >>> > >> {'permhwaddr': >>>>>> >>> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': >>>>>> >>> >>> > >> 'yes', >>>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', >>>>>> >>> >>> > >> 'MASTER': >>>>>> >>> >>> > >> 'bond0', >>>>>> >>> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>>>> >>> >>> > >> '60:eb:69:20:b4:6c', >>>>>> >>> >>> > >> 'speed': >>>>>> >>> >>> > >> 1000}, 'eth0': {'permhwaddr': 
'60:eb:69:20:b4:6c', >>>>>> >>> >>> > >> 'addr': '', >>>>>> >>> >>> > >> 'cfg': >>>>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'HWADDR': >>>>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': >>>>>> >>> >>> > >> 'eth0', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'yes'}, >>>>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', >>>>>> >>> >>> > >> 'ipv4addrs': [], >>>>>> >>> >>> > >> 'hwaddr': >>>>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, >>>>>> >>> >>> > >> 'software_revision': '1', >>>>>> >>> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', >>>>>> >>> >>> > >> '3.5'], >>>>>> >>> >>> > >> 'cpuFlags': >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >>>>>> >>> >>> > >> 'ISCSIInitiatorName': >>>>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8', >>>>>> >>> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs': >>>>>> >>> >>> > >> ['3.0', >>>>>> >>> >>> > >> '3.1', >>>>>> >>> >>> > >> '3.2', >>>>>> >>> >>> > >> '3.3', >>>>>> >>> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': >>>>>> >>> >>> > >> '321', >>>>>> >>> >>> > >> 'bondings': >>>>>> >>> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', >>>>>> >>> >>> > >> 'netmask': '', >>>>>> >>> >>> > >> 'slaves': >>>>>> >>> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': >>>>>> >>> >>> > >> '', >>>>>> >>> >>> > >> 'cfg': >>>>>> >>> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'BONDING_OPTS': >>>>>> >>> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': >>>>>> >>> >>> > >> 'yes'}, >>>>>> >>> >>> > >> 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', >>>>>> >>> >>> > >> 'netmask': '', >>>>>> >>> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', >>>>>> >>> >>> > >> 'slaves': >>>>>> >>> >>> > >> ['eth0', >>>>>> >>> >>> > >> 'eth1'], >>>>>> >>> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': >>>>>> >>> >>> > >> {'addr': >>>>>> >>> >>> > >> '10.10.10.6', >>>>>> >>> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', >>>>>> >>> >>> > >> 'HOTPLUG': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'MTU': >>>>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': >>>>>> >>> >>> > >> '255.255.255.0', >>>>>> >>> >>> > >> 'BOOTPROTO': >>>>>> >>> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', >>>>>> >>> >>> > >> 'DEVICE': >>>>>> >>> >>> > >> 'bond1', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '9000', >>>>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>>>>> >>> >>> > >> ['10.10.10.6/24' >>>>>> >>> >>> > >> <http://10.10.10.6/24%27>], 'hwaddr': >>>>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>>>> >>> >>> > >> 'slaves': >>>>>> >>> >>> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': >>>>>> >>> >>> > >> '4'}}, >>>>>> >>> >>> > >> 'bond2': >>>>>> 
>>> >>> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', >>>>>> >>> >>> > >> 'slaves': >>>>>> >>> >>> > >> [], >>>>>> >>> >>> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', >>>>>> >>> >>> > >> 'cfg': {}, >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': >>>>>> >>> >>> > >> '00:00:00:00:00:00'}}, >>>>>> >>> >>> > >> 'software_version': '4.16', 'memSize': '24019', >>>>>> >>> >>> > >> 'cpuSpeed': >>>>>> >>> >>> > >> '2667.000', >>>>>> >>> >>> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': >>>>>> >>> >>> > >> [6, >>>>>> >>> >>> > >> 7, 8, >>>>>> >>> >>> > >> 9, >>>>>> >>> >>> > >> 10, 11, >>>>>> >>> >>> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': >>>>>> >>> >>> > >> '12278', >>>>>> >>> >>> > >> 'cpus': >>>>>> >>> >>> > >> [0, >>>>>> >>> >>> > >> 1, 2, >>>>>> >>> >>> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': >>>>>> >>> >>> > >> 'Snow >>>>>> >>> >>> > >> Man', >>>>>> >>> >>> > >> 'vlans': >>>>>> >>> >>> > >> {'bond0.10': {'iface': 'bond0', 'addr': >>>>>> >>> >>> > >> '43.25.17.16', >>>>>> >>> >>> > >> 'cfg': >>>>>> >>> >>> > >> {'DEFROUTE': >>>>>> >>> >>> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', >>>>>> >>> >>> > >> 'HOTPLUG': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'GATEWAY': >>>>>> >>> >>> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': >>>>>> >>> >>> > >> '255.255.255.0', >>>>>> >>> >>> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'yes'}, >>>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>>> >>> >>> > >> 'vlanid': >>>>>> >>> >>> > >> 10, >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>>>>> >>> >>> > >> ['43.25.17.16/24'] >>>>>> >>> >>> > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': >>>>>> >>> >>> > >> 'bond0', >>>>>> >>> >>> > >> 'addr': >>>>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', >>>>>> >>> >>> > >> 'HOTPLUG': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'MTU': >>>>>> >>> >>> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, >>>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>>> >>> >>> > >> 'vlanid': >>>>>> >>> >>> > >> 36, >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': >>>>>> >>> >>> > >> {'iface': >>>>>> >>> >>> > >> 'bond1', >>>>>> >>> >>> > >> 'addr': >>>>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', >>>>>> >>> >>> > >> 'HOTPLUG': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'MTU': >>>>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, >>>>>> >>> >>> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>>>>> >>> >>> > >> 'vlanid': >>>>>> >>> >>> > >> 100, >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '9000', >>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', >>>>>> >>> >>> > >> 'kvmEnabled': >>>>>> >>> >>> > >> 'true', >>>>>> >>> >>> > >> 'guestOverhead': '65', 'cpuThreads': '24', >>>>>> >>> >>> > >> 'emulatedMachines': >>>>>> >>> >>> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', >>>>>> >>> >>> > >> u'rhel6.2.0', >>>>>> >>> >>> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', >>>>>> >>> >>> > >> u'rhel5.4.4', >>>>>> >>> >>> > >> u'rhel5.4.0'], >>>>>> >>> >>> > >> 
'operatingSystem': {'release': '5.el6.centos.11.1', >>>>>> >>> >>> > >> 'version': >>>>>> >>> >>> > >> '6', >>>>>> >>> >>> > >> 'name': >>>>>> >>> >>> > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) >>>>>> >>> >>> > >> client >>>>>> >>> >>> > >> [10.10.10.2]::call >>>>>> >>> >>> > >> getHardwareInfo with () {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) >>>>>> >>> >>> > >> return >>>>>> >>> >>> > >> getHardwareInfo >>>>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>>>> >>> >>> > >> 'info': >>>>>> >>> >>> > >> {'systemProductName': 'CS24-TY', >>>>>> >>> >>> > >> 'systemSerialNumber': >>>>>> >>> >>> > >> '7LWSPN1', >>>>>> >>> >>> > >> 'systemFamily': 'Server', 'systemVersion': 'A00', >>>>>> >>> >>> > >> 'systemUUID': >>>>>> >>> >>> > >> '44454c4c-4c00-1057-8053-b7c04f504e31', >>>>>> >>> >>> > >> 'systemManufacturer': >>>>>> >>> >>> > >> 'Dell'}} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) >>>>>> >>> >>> > >> client >>>>>> >>> >>> > >> [10.10.10.2]::call >>>>>> >>> >>> > >> hostsList with () {} flowID [222e8036] >>>>>> >>> >>> > >> Thread-13::ERROR::2014-11-24 >>>>>> >>> >>> > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) >>>>>> >>> >>> > >> vdsm >>>>>> >>> >>> > >> exception >>>>>> >>> >>> > >> occured >>>>>> >>> >>> > >> Traceback (most recent call last): >>>>>> >>> >>> > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py",
>>>>>> >>> >>> > >> 1135, >>>>>> >>> >>> > >> in >>>>>> >>> >>> > >> wrapper >>>>>> >>> >>> > >> res = f(*args, **kwargs) >>>>>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 54, in >>>>>> >>> >>> > >> wrapper >>>>>> >>> >>> > >> rv = func(*args, **kwargs) >>>>>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 251, in >>>>>> >>> >>> > >> hostsList >>>>>> >>> >>> > >> return {'hosts': >>>>>> >>> >>> > >> self.svdsmProxy.glusterPeerStatus()} >>>>>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 50, in >>>>>> >>> >>> > >> __call__ >>>>>> >>> >>> > >> return callMethod() >>>>>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 48, in >>>>>> >>> >>> > >> <lambda> >>>>>> >>> >>> > >> **kwargs) >>>>>> >>> >>> > >> File "<string>", line 2, in glusterPeerStatus >>>>>> >>> >>> > >> File >>>>>> >>> >>> > >> "/usr/lib64/python2.6/multiprocessing/managers.py", >>>>>> >>> >>> > >> line >>>>>> >>> >>> > >> 740, >>>>>> >>> >>> > >> in >>>>>> >>> >>> > >> _callmethod >>>>>> >>> >>> > >> raise convert_to_error(kind, result) >>>>>> >>> >>> > >> GlusterCmdExecFailedException: Command execution >>>>>> >>> >>> > >> failed >>>>>> >>> >>> > >> error: Connection failed. Please check if gluster >>>>>> >>> >>> > >> daemon >>>>>> >>> >>> > >> is >>>>>> >>> >>> > >> operational. >>>>>> >>> >>> > >> return code: 1 >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) >>>>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving >>>>>> >>> >>> > >> from >>>>>> >>> >>> > >> state >>>>>> >>> >>> > >> init >>>>>> >>> >>> > >> -> >>>>>> >>> >>> > >> state preparing >>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>>> >>> >>> > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run >>>>>> >>> >>> > >> and >>>>>> >>> >>> > >> protect: >>>>>> >>> >>> > >> repoStats(options=None) >>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>>> >>> >>> > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run >>>>>> >>> >>> > >> and >>>>>> >>> >>> > >> protect: >>>>>> >>> >>> > >> repoStats, Return response: {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) >>>>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving >>>>>> >>> >>> > >> from >>>>>> >>> >>> > >> state >>>>>> >>> >>> > >> preparing >>>>>> >>> >>> > >> -> >>>>>> >>> >>> > >> state finished >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>>>>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 
21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) >>>>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 >>>>>> >>> >>> > >> aborting >>>>>> >>> >>> > >> False >>>>>> >>> >>> > >> ------------------------------- >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> [root@compute4 ~]# service glusterd status >>>>>> >>> >>> > >> glusterd is stopped >>>>>> >>> >>> > >> [root@compute4 ~]# chkconfig --list | grep glusterd >>>>>> >>> >>> > >> glusterd 0:off 1:off 2:on 3:on 4:on >>>>>> >>> >>> > >> 5:on >>>>>> >>> >>> > >> 6:off >>>>>> >>> >>> > >> [root@compute4 ~]# >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Thanks, >>>>>> >>> >>> > >> Punit >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj >>>>>> >>> >>> > >> <kmayilsa@redhat.com> >>>>>> >>> >>> > >> wrote: >>>>>> >>> >>> > >> >>>>>> >>> >>> > >>> Can you send the corresponding error in vdsm.log >>>>>> >>> >>> > >>> from >>>>>> >>> >>> > >>> the >>>>>> >>> >>> > >>> host? >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> Also check if glusterd service is running. >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> Thanks, >>>>>> >>> >>> > >>> Kanagaraj >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote: >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> Hi, >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> After reboot my Hypervisior host can not activate >>>>>> >>> >>> > >>> again >>>>>> >>> >>> > >>> in the >>>>>> >>> >>> > >>> cluster >>>>>> >>> >>> > >>> and failed with the following error :- >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> Gluster command [<UNKNOWN>] failed on server... >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> Engine logs :- >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> 2014-11-24 18:05:28,397 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-64) START, >>>>>> >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, >>>>>> >>> >>> > >>> HostId >>>>>> >>> >>> > >>> = >>>>>> >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: >>>>>> >>> >>> > >>> 5f251c90 >>>>>> >>> >>> > >>> 2014-11-24 18:05:30,609 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-64) FINISH, >>>>>> >>> >>> > >>> GlusterVolumesListVDSCommand, >>>>>> >>> >>> > >>> return: >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0 }, >>>>>> >>> >>> > >>> log id: 5f251c90 >>>>>> >>> >>> > >>> 2014-11-24 18:05:33,768 INFO >>>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>>> >>> >>> > >>> (ajp--127.0.0.1-8702-8) >>>>>> >>> >>> > >>> [287d570d] Lock Acquired to object EngineLock >>>>>> >>> >>> > >>> [exclusiveLocks= >>>>>> >>> >>> > >>> key: >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS >>>>>> >>> >>> > >>> , sharedLocks= ] >>>>>> >>> >>> > >>> 2014-11-24 18:05:33,795 INFO >>>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>>>> >>> >>> > >>> Running >>>>>> >>> >>> > >>> command: >>>>>> >>> >>> > >>> ActivateVdsCommand internal: false. 
Entities >>>>>> >>> >>> > >>> affected : >>>>>> >>> >>> > >>> ID: >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction >>>>>> >>> >>> > >>> group >>>>>> >>> >>> > >>> MANIPULATE_HOST >>>>>> >>> >>> > >>> with role type ADMIN >>>>>> >>> >>> > >>> 2014-11-24 18:05:33,796 INFO >>>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>>>> >>> >>> > >>> Before >>>>>> >>> >>> > >>> acquiring >>>>>> >>> >>> > >>> lock in >>>>>> >>> >>> > >>> order to prevent monitoring for host Compute5 from >>>>>> >>> >>> > >>> data-center >>>>>> >>> >>> > >>> SV_WTC >>>>>> >>> >>> > >>> 2014-11-24 18:05:33,797 INFO >>>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock >>>>>> >>> >>> > >>> acquired, >>>>>> >>> >>> > >>> from >>>>>> >>> >>> > >>> now a >>>>>> >>> >>> > >>> monitoring of host will be skipped for host Compute5 >>>>>> >>> >>> > >>> from >>>>>> >>> >>> > >>> data-center >>>>>> >>> >>> > >>> SV_WTC >>>>>> >>> >>> > >>> 2014-11-24 18:05:33,817 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>>>> >>> >>> > >>> START, >>>>>> >>> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>>>>> >>> >>> > >>> status=Unassigned, >>>>>> >>> >>> > >>> nonOperationalReason=NONE, >>>>>> >>> >>> > >>> stopSpmFailureLogged=false), >>>>>> >>> >>> > >>> log id: >>>>>> >>> >>> > >>> 1cbc7311 >>>>>> >>> >>> > >>> 2014-11-24 18:05:33,820 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>>>> >>> >>> > >>> FINISH, >>>>>> >>> >>> > >>> SetVdsStatusVDSCommand, log id: 1cbc7311 >>>>>> >>> >>> > >>> 2014-11-24 18:05:34,086 INFO >>>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Activate >>>>>> >>> >>> > >>> finished. >>>>>> >>> >>> > >>> Lock >>>>>> >>> >>> > >>> released. >>>>>> >>> >>> > >>> Monitoring can run now for host Compute5 from >>>>>> >>> >>> > >>> data-center >>>>>> >>> >>> > >>> SV_WTC >>>>>> >>> >>> > >>> 2014-11-24 18:05:34,088 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Correlation ID: >>>>>> >>> >>> > >>> 287d570d, >>>>>> >>> >>> > >>> Job >>>>>> >>> >>> > >>> ID: >>>>>> >>> >>> > >>> 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: >>>>>> >>> >>> > >>> null, >>>>>> >>> >>> > >>> Custom >>>>>> >>> >>> > >>> Event ID: >>>>>> >>> >>> > >>> -1, Message: Host Compute5 was activated by admin. 
>>>>>> >>> >>> > >>> 2014-11-24 18:05:34,090 INFO >>>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Lock freed to >>>>>> >>> >>> > >>> object >>>>>> >>> >>> > >>> EngineLock >>>>>> >>> >>> > >>> [exclusiveLocks= key: >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a >>>>>> >>> >>> > >>> value: >>>>>> >>> >>> > >>> VDS >>>>>> >>> >>> > >>> , sharedLocks= ] >>>>>> >>> >>> > >>> 2014-11-24 18:05:35,792 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] START, >>>>>> >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, >>>>>> >>> >>> > >>> HostId >>>>>> >>> >>> > >>> = >>>>>> >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: >>>>>> >>> >>> > >>> 48a0c832 >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,064 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) START, >>>>>> >>> >>> > >>> GetHardwareInfoVDSCommand(HostName = Compute5, >>>>>> >>> >>> > >>> HostId = >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log >>>>>> >>> >>> > >>> id: >>>>>> >>> >>> > >>> 6d560cc2 >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,074 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) FINISH, >>>>>> >>> >>> > >>> GetHardwareInfoVDSCommand, log >>>>>> >>> >>> > >>> id: 6d560cc2 >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,093 WARN >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsManager] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) Host Compute5 is >>>>>> >>> >>> > >>> running >>>>>> >>> >>> > >>> with >>>>>> >>> >>> > >>> disabled >>>>>> >>> >>> > >>> SELinux. >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,127 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] >>>>>> >>> >>> > >>> Running >>>>>> >>> >>> > >>> command: >>>>>> >>> >>> > >>> HandleVdsCpuFlagsOrClusterChangedCommand internal: >>>>>> >>> >>> > >>> true. 
>>>>>> >>> >>> > >>> Entities >>>>>> >>> >>> > >>> affected >>>>>> >>> >>> > >>> : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: >>>>>> >>> >>> > >>> VDS >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,147 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, >>>>>> >>> >>> > >>> GlusterServersListVDSCommand(HostName = Compute5, >>>>>> >>> >>> > >>> HostId >>>>>> >>> >>> > >>> = >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: >>>>>> >>> >>> > >>> 4faed87 >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,164 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] >>>>>> >>> >>> > >>> FINISH, >>>>>> >>> >>> > >>> GlusterServersListVDSCommand, log id: 4faed87 >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,189 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>>>> >>> >>> > >>> Running >>>>>> >>> >>> > >>> command: >>>>>> >>> >>> > >>> SetNonOperationalVdsCommand internal: true. Entities >>>>>> >>> >>> > >>> affected : >>>>>> >>> >>> > >>> ID: >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,206 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, >>>>>> >>> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>>>>> >>> >>> > >>> status=NonOperational, >>>>>> >>> >>> > >>> nonOperationalReason=GLUSTER_COMMAND_FAILED, >>>>>> >>> >>> > >>> stopSpmFailureLogged=false), >>>>>> >>> >>> > >>> log id: fed5617 >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,209 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>>>> >>> >>> > >>> FINISH, >>>>>> >>> >>> > >>> SetVdsStatusVDSCommand, log id: fed5617 >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,223 ERROR >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>>>> >>> >>> > >>> Correlation ID: >>>>>> >>> >>> > >>> 4a84c4e5, >>>>>> >>> >>> > >>> Job >>>>>> >>> >>> > >>> ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call >>>>>> >>> >>> > >>> Stack: >>>>>> >>> >>> > >>> null, >>>>>> >>> >>> > >>> Custom >>>>>> >>> >>> > >>> Event >>>>>> >>> >>> > >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed >>>>>> >>> >>> > >>> on >>>>>> >>> >>> > >>> server >>>>>> >>> >>> > >>> Compute5. 
>>>>>> >>> >>> > >>> 2014-11-24 18:05:37,243 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>>>> >>> >>> > >>> Correlation ID: >>>>>> >>> >>> > >>> null, >>>>>> >>> >>> > >>> Call >>>>>> >>> >>> > >>> Stack: null, Custom Event ID: -1, Message: Status of >>>>>> >>> >>> > >>> host >>>>>> >>> >>> > >>> Compute5 >>>>>> >>> >>> > >>> was >>>>>> >>> >>> > >>> set >>>>>> >>> >>> > >>> to NonOperational. >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,272 INFO >>>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsVersionCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running >>>>>> >>> >>> > >>> command: >>>>>> >>> >>> > >>> HandleVdsVersionCommand internal: true. Entities >>>>>> >>> >>> > >>> affected : >>>>>> >>> >>> > >>> ID: >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >>>>>> >>> >>> > >>> 2014-11-24 18:05:37,274 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host >>>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is >>>>>> >>> >>> > >>> already in >>>>>> >>> >>> > >>> NonOperational status for reason >>>>>> >>> >>> > >>> GLUSTER_COMMAND_FAILED. >>>>>> >>> >>> > >>> SetNonOperationalVds command is skipped. >>>>>> >>> >>> > >>> 2014-11-24 18:05:38,065 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] >>>>>> >>> >>> > >>> FINISH, >>>>>> >>> >>> > >>> GlusterVolumesListVDSCommand, return: >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1 }, >>>>>> >>> >>> > >>> log id: 48a0c832 >>>>>> >>> >>> > >>> 2014-11-24 18:05:43,243 INFO >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-35) START, >>>>>> >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, >>>>>> >>> >>> > >>> HostId >>>>>> >>> >>> > >>> = >>>>>> >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: >>>>>> >>> >>> > >>> 3ce13ebc >>>>>> >>> >>> > >>> ^C >>>>>> >>> >>> > >>> [root@ccr01 ~]# >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> Thanks, >>>>>> >>> >>> > >>> Punit >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> _______________________________________________ >>>>>> >>> >>> > >>> Users mailing >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> >>>>>> >>> >>> > >>> listUsers@ovirt.orghttp://
lists.ovirt.org/mailman/listinfo/users

I posted a bug in CentOS about a problem with NetworkManager-wait-online.service (this service is executed before network-online.target): http://bugs.centos.org/view.php?id=7825

I had a problem mounting DRBD on CentOS 7 at boot time, and I think it could be similar to the problem of mounting Gluster at boot time on CentOS 7. I modified the service, adding one line that waits for NetworkManager to set up the IP address on the interface that is used to connect the DRBD. The IP address on eth0 is 10.88.0.1.

# cat /usr/lib/systemd/system/NetworkManager-wait-online.service
[Unit]
Description=Network Manager Wait Online
Requisite=NetworkManager.service
After=NetworkManager.service
Wants=network.target
Before=network.target network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/nm-online -q --timeout=30
ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'

Alberto Larraz

2014-12-09 10:54 GMT+01:00 Punit Dambiwal <hypunit@gmail.com>:
Dear Kaushal,
I tried various methods, but still the same error... it seems it's a gluster bug. Is there anybody who can suggest a workaround here??
Thanks, Punit
On Sun, Dec 7, 2014 at 8:40 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
Still the same error... even after trying your suggested workaround :-
------------------- Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD. -------------------
Thanks, Punit
On Sat, Dec 6, 2014 at 11:44 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
I already have all the host entries in /etc/hosts for easy resolution... I will try your method in glusterd.service, check, and let you know whether it solves the problem or not...
On Fri, Dec 5, 2014 at 9:50 PM, Kaushal M <kshlmster@gmail.com> wrote:
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
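For reference, with that change the [Unit] section of glusterd.service would look roughly like this (a sketch only; the extra After=network-online.target line is an assumption beyond the suggestion above, since in systemd Wants= merely pulls the target in and is normally paired with After= to get the ordering, and network-online.target only does anything useful if NetworkManager-wait-online.service is enabled):

[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target network-online.target rpcbind.service
Wants=network-online.target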
If even that fails, you could try adding an entry into /etc/hosts with the hostname of the system. This should prevent any more failures.
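Concretely, that would mean entries along these lines on each node, covering the node's own hostname as well as its peers (a sketch; 10.10.0.8 is taken from the gluster peer status output further down in this thread, the other addresses are placeholders and would have to match the real storage network):

# /etc/hosts
10.10.0.8   cpu04.stack.com
10.10.0.2   cpu02.stack.com                    # placeholder address
10.10.0.3   cpu03.stack.com                    # placeholder address
10.10.0.5   cpu05.zne01.hkg1.ovt.36stack.com   # placeholder address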
I still don't believe it's a problem with Gluster. Gluster uses APIs provided by the system to perform name resolution. These definitely work correctly, because you can start GlusterD later. Since the resolution failure only happens during boot, it points to system or network setup issues during boot. To me it seems like the network isn't completely set up at that point in time.
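One way to confirm that on the affected host (a debugging sketch, not something proposed in the thread; the hostname is the peer that fails to resolve in the glusterd logs below) is to log a name lookup just before glusterd starts and read it back from the journal after the next reboot:

# temporary line in the [Service] section of glusterd.service; the leading
# '-' means a failed lookup is logged but does not block glusterd itself
ExecStartPre=-/bin/sh -c 'getent hosts cpu05.zne01.hkg1.ovt.36stack.com || echo "name resolution not ready when glusterd started"'

# after rebooting, check what the lookup printed at that point of the boot
journalctl -b -u glusterd.service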
~kaushal
On Fri, Dec 5, 2014 at 12:47 PM, Punit Dambiwal <hypunit@gmail.com> wrote:

Hi Kaushal,
It seems it's a bug in glusterfs 3.6... even though I set up systemd to start the network service before glusterd, it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
Thanks, Punit
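Whether a tweak like the ExecStartPre above really brings the network up before glusterd can be checked after a reboot instead of guessed at (a sketch; the unit names are the ones shipped on CentOS 7):

# show the chain of units glusterd actually waited for during this boot
systemd-analyze critical-chain glusterd.service

# compare when the network came up with when glusterd was started
journalctl -b -u network.service -u NetworkManager-wait-online.service -u glusterd.service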
On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user on the mailing list a while back with a similar issue of GlusterD failing to start. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration, and the problem vanished when he began using DHCP. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing you could look at.
I'll try to find the mail thread to see if it was the same problem
as yours.
~kaushal
On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com>
wrote:
> I don't know much about how the network target is brought up in > CentOS7, but I'll try as much as I can. > > It seems to me that, after the network has been brought up and by
> time GlusterD is started, > a. The machine hasn't yet recieved it's hostname, or > b. It hasn't yet registered with the name server. > > This is causing name resolution failures. > > I don't know if the network target could come up without the machine > getting its hostname, so I'm pretty sure it's not a. > > So it seems to be b. But these kind of signing in happens only in DDNS > systems, which doesn't seem to be the case for you. > > Both of these reasons might be wrong (most likely wrong). You'd do > good if you could ask for help from someone with more experience in > systemd + networking. > > ~kaushal > > On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com
> wrote: >> Hi Kaushal, >> >> This is the host...which i rebooted...would you mind to let me know how >> i >> can make the glusterd sevice come up after network...i am using >> centos7...if >> network is the issue... >> >> On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote: >>> >>> This peer cannot be identified. >>> >>> " [2014-12-03 02:29:25.998153] D >>> [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] >>> 0-management: >>> Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com" >>> >>> I don't know why this address is not being resolved during boot time. >>> If >>> this is a valid peer, the the only reason I can think of this
>>> network is not up. >>> >>> If you had previously detached the peer forcefully, the that could >>> have >>> left stale entries in some volumes. In this case as well, GlusterD >>> will fail >>> to identify the peer. >>> >>> Do either of these reasons seem a possibility to you? >>> >>> On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>>> >>>> Hi Kaushal, >>>> >>>> Please find the logs here :- http://ur1.ca/iyoe5 and >>>> http://ur1.ca/iyoed >>>> >>>> On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com
>>>> wrote: >>>>> >>>>> Hey Punit, >>>>> In the logs you've provided, GlusterD appears to be running >>>>> correctly. >>>>> Could you provide the logs for the time period when GlusterD >>>>> attempts to >>>>> start but fails. >>>>> >>>>> ~kaushal >>>>> >>>>> On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>>>>> >>>>>> Hi Kaushal, >>>>>> >>>>>> Please find the logs here :- http://ur1.ca/iyhs5 and >>>>>> http://ur1.ca/iyhue >>>>>> >>>>>> Thanks, >>>>>> punit >>>>>> >>>>>> >>>>>> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M < kshlmster@gmail.com> >>>>>> wrote: >>>>>>> >>>>>>> Hey Punit, >>>>>>> Could you start Glusterd in debug mode and provide the logs here? >>>>>>> To start it in debug mode, append '-LDEBUG' to the ExecStart
>>>>>>> in >>>>>>> the service file. >>>>>>> >>>>>>> ~kaushal >>>>>>> >>>>>>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal < hypunit@gmail.com> >>>>>>> wrote: >>>>>>> > Hi, >>>>>>> > >>>>>>> > Can Any body help me on this ?? >>>>>>> > >>>>>>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal >>>>>>> > <hypunit@gmail.com> >>>>>>> > wrote: >>>>>>> >> >>>>>>> >> Hi Kaushal, >>>>>>> >> >>>>>>> >> Thanks for the detailed reply....let me explain my setup first >>>>>>> >> :- >>>>>>> >> >>>>>>> >> 1. Ovirt Engine >>>>>>> >> 2. 4* host as well as storage machine (Host and gluster >>>>>>> >> combined) >>>>>>> >> 3. Every host has 24 bricks... >>>>>>> >> >>>>>>> >> Now whenever the host machine reboot...it can come up but can >>>>>>> >> not >>>>>>> >> join the >>>>>>> >> cluster again and through the following error "Gluster command >>>>>>> >> [<UNKNOWN>] >>>>>>> >> failed on server.." >>>>>>> >> >>>>>>> >> Please check my comment in line :- >>>>>>> >> >>>>>>> >> 1. Use the same string for doing the peer probe and for
>>>>>>> >> brick >>>>>>> >> address >>>>>>> >> during volume create/add-brick. Ideally, we suggest you use >>>>>>> >> properly >>>>>>> >> resolvable FQDNs everywhere. If that is not possible,
>>>>>>> >> only >>>>>>> >> IP >>>>>>> >> addresses. Try to avoid short names. >>>>>>> >> --------------- >>>>>>> >> [root@cpu05 ~]# gluster peer status >>>>>>> >> Number of Peers: 3 >>>>>>> >> >>>>>>> >> Hostname: cpu03.stack.com >>>>>>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >>>>>>> >> State: Peer in Cluster (Connected) >>>>>>> >> >>>>>>> >> Hostname: cpu04.stack.com >>>>>>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >>>>>>> >> State: Peer in Cluster (Connected) >>>>>>> >> Other names: >>>>>>> >> 10.10.0.8 >>>>>>> >> >>>>>>> >> Hostname: cpu02.stack.com >>>>>>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >>>>>>> >> State: Peer in Cluster (Connected) >>>>>>> >> [root@cpu05 ~]# >>>>>>> >> ---------------- >>>>>>> >> 2. During boot up, make sure to launch glusterd only after the >>>>>>> >> network is >>>>>>> >> up. This will allow the new peer identification mechanism to do >>>>>>> >> its >>>>>>> >> job correctly. >>>>>>> >> >> I think the service itself doing the same job.... >>>>>>> >> >>>>>>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >>>>>>> >> [Unit] >>>>>>> >> Description=GlusterFS, a clustered file-system server >>>>>>> >> After=network.target rpcbind.service >>>>>>> >> Before=network-online.target >>>>>>> >> >>>>>>> >> [Service] >>>>>>> >> Type=forking >>>>>>> >> PIDFile=/var/run/glusterd.pid >>>>>>> >> LimitNOFILE=65536 >>>>>>> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >>>>>>> >> KillMode=process >>>>>>> >> >>>>>>> >> [Install] >>>>>>> >> WantedBy=multi-user.target >>>>>>> >> [root@cpu05 ~]# >>>>>>> >> -------------------- >>>>>>> >> >>>>>>> >> gluster logs :- >>>>>>> >> >>>>>>> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >>>>>>> >> [glusterfsd.c:2018:main] >>>>>>> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd >>>>>>> >> version >>>>>>> >> 3.6.1 >>>>>>> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >>>>>>> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] >>>>>>> >> 0-management: >>>>>>> >> Maximum allowed open file descriptors set to 65536 >>>>>>> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] >>>>>>> >> 0-management: >>>>>>> >> Using >>>>>>> >> /var/lib/glusterd as working directory >>>>>>> >> [2014-11-24 09:22:22.155216] W >>>>>>> >> [rdma.c:4195:__gf_rdma_ctx_create] >>>>>>> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >>>>>>> >> such device) >>>>>>> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >>>>>>> >> 0-rdma.management: >>>>>>> >> Failed to initialize IB Device >>>>>>> >> [2014-11-24 09:22:22.155285] E >>>>>>> >> [rpc-transport.c:333:rpc_transport_load] >>>>>>> >> 0-rpc-transport: 'rdma' initialization failed >>>>>>> >> [2014-11-24 09:22:22.155354] W >>>>>>> >> [rpcsvc.c:1524:rpcsvc_transport_create] >>>>>>> >> 0-rpc-service: cannot create listener, initing the
>>>>>>> >> failed >>>>>>> >> [2014-11-24 09:22:22.156290] I >>>>>>> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >>>>>>> >> geo-replication >>>>>>> >> module not installed in the system >>>>>>> >> [2014-11-24 09:22:22.161318] I >>>>>>> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >>>>>>> >> retrieved >>>>>>> >> op-version: 30600 >>>>>>> >> [2014-11-24 09:22:22.821800] I >>>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>>> >> 0-management: >>>>>>> >> connect returned 0 >>>>>>> >> [2014-11-24 09:22:22.825810] I >>>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>>> >> 0-management: >>>>>>> >> connect returned 0 >>>>>>> >> [2014-11-24 09:22:22.828705] I >>>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>>> >> 0-management: >>>>>>> >> connect returned 0 >>>>>>> >> [2014-11-24 09:22:22.828771] I >>>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>>> >> 0-management: setting frame-timeout to 600 >>>>>>> >> [2014-11-24 09:22:22.832670] I >>>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>>> >> 0-management: setting frame-timeout to 600 >>>>>>> >> [2014-11-24 09:22:22.835919] I >>>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>>> >> 0-management: setting frame-timeout to 600 >>>>>>> >> [2014-11-24 09:22:22.840209] E >>>>>>> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >>>>>>> >> resolve >>>>>>> >> brick failed in restore >>>>>>> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >>>>>>> >> 0-management: >>>>>>> >> Initialization of volume 'management' failed, review your >>>>>>> >> volfile >>>>>>> >> again >>>>>>> >> [2014-11-24 09:22:22.840245] E >>>>>>> >> [graph.c:322:glusterfs_graph_init] >>>>>>> >> 0-management: initializing translator failed >>>>>>> >> [2014-11-24 09:22:22.840264] E >>>>>>> >> [graph.c:525:glusterfs_graph_activate] >>>>>>> >> 0-graph: init failed >>>>>>> >> [2014-11-24 09:22:22.840754] W >>>>>>> >> [glusterfsd.c:1194:cleanup_and_exit] >>>>>>> >> (--> >>>>>>> >> 0-: received signum (0), shutting down >>>>>>> >> >>>>>>> >> Thanks, >>>>>>> >> Punit >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M >>>>>>> >> <kshlmster@gmail.com> >>>>>>> >> wrote: >>>>>>> >>> >>>>>>> >>> Based on the logs I can guess that glusterd is being started >>>>>>> >>> before >>>>>>> >>> the network has come up and that the addresses given to bricks >>>>>>> >>> do >>>>>>> >>> not >>>>>>> >>> directly match the addresses used in during peer probe. >>>>>>> >>> >>>>>>> >>> The gluster_after_reboot log has the line "[2014-11-25 >>>>>>> >>> 06:46:09.972113] E >>>>>>> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >>>>>>> >>> 0-glusterd: resolve brick failed in restore". >>>>>>> >>> >>>>>>> >>> Brick resolution fails when glusterd cannot match the address >>>>>>> >>> for >>>>>>> >>> the >>>>>>> >>> brick, with one of the peers. Brick resolution happens in two >>>>>>> >>> phases, >>>>>>> >>> 1. We first try to identify the peer by performing string >>>>>>> >>> comparisions >>>>>>> >>> with the brick address and the peer addresses (The peer names >>>>>>> >>> will >>>>>>> >>> be >>>>>>> >>> the names/addresses that were given when the peer was
>>>>>>> >>> 2. If we don't find a match from step 1, we will then resolve >>>>>>> >>> all >>>>>>> >>> the >>>>>>> >>> brick address and the peer addresses into addrinfo structs, >>>>>>> >>> and >>>>>>> >>> then >>>>>>> >>> compare these structs to find a match. This process should >>>>>>> >>> generally >>>>>>> >>> find a match if available. This will fail only if the network >>>>>>> >>> is >>>>>>> >>> not >>>>>>> >>> up yet as we cannot resolve addresses. >>>>>>> >>> >>>>>>> >>> The above steps are applicable only to glusterfs versions >>>>>>> >>> >=3.6. >>>>>>> >>> They >>>>>>> >>> were introduced to reduce problems with peer identification, >>>>>>> >>> like >>>>>>> >>> the >>>>>>> >>> one you encountered >>>>>>> >>> >>>>>>> >>> Since both of the steps failed to find a match in one run, but >>>>>>> >>> succeeded later, we can come to the conclusion that, >>>>>>> >>> a) the bricks don't have the exact same string used in
>>>>>>> >>> probe >>>>>>> >>> for >>>>>>> >>> their addresses as step 1 failed, and >>>>>>> >>> b) the network was not up in the initial run, as step 2 failed >>>>>>> >>> during >>>>>>> >>> the initial run, but passed in the second run. >>>>>>> >>> >>>>>>> >>> Please let me know if my conclusion is correct. >>>>>>> >>> >>>>>>> >>> If it is, you can solve your problem in two ways. >>>>>>> >>> 1. Use the same string for doing the peer probe and for
>>>>>>> >>> brick >>>>>>> >>> address during volume create/add-brick. Ideally, we suggest >>>>>>> >>> you >>>>>>> >>> use >>>>>>> >>> properly resolvable FQDNs everywhere. If that is not
>>>>>>> >>> then >>>>>>> >>> use only IP addresses. Try to avoid short names. >>>>>>> >>> 2. During boot up, make sure to launch glusterd only after the >>>>>>> >>> network >>>>>>> >>> is up. This will allow the new peer identification mechanism >>>>>>> >>> to do >>>>>>> >>> its >>>>>>> >>> job correctly. >>>>>>> >>> >>>>>>> >>> >>>>>>> >>> If you have already followed these steps and yet still hit the >>>>>>> >>> problem, then please provide more information (setup, logs, >>>>>>> >>> etc.). >>>>>>> >>> It >>>>>>> >>> could be much different problem that you are facing. >>>>>>> >>> >>>>>>> >>> ~kaushal >>>>>>> >>> >>>>>>> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal >>>>>>> >>> <hypunit@gmail.com> >>>>>>> >>> wrote: >>>>>>> >>> > Is there any one can help on this ?? >>>>>>> >>> > >>>>>>> >>> > Thanks, >>>>>>> >>> > punit >>>>>>> >>> > >>>>>>> >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal >>>>>>> >>> > <hypunit@gmail.com> >>>>>>> >>> > wrote: >>>>>>> >>> >> >>>>>>> >>> >> Hi, >>>>>>> >>> >> >>>>>>> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >>>>>>> >>> >> >>>>>>> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >>>>>>> >>> >> <kmayilsa@redhat.com> >>>>>>> >>> >> wrote: >>>>>>> >>> >>> >>>>>>> >>> >>> [+Gluster-users@gluster.org] >>>>>>> >>> >>> >>>>>>> >>> >>> "Initialization of volume 'management' failed, review your >>>>>>> >>> >>> volfile >>>>>>> >>> >>> again", glusterd throws this error when the service is >>>>>>> >>> >>> started >>>>>>> >>> >>> automatically >>>>>>> >>> >>> after the reboot. But the service is successfully started >>>>>>> >>> >>> later >>>>>>> >>> >>> manually by >>>>>>> >>> >>> the user. >>>>>>> >>> >>> >>>>>>> >>> >>> can somebody from gluster-users please help on this? >>>>>>> >>> >>> >>>>>>> >>> >>> glusterfs version: 3.5.1 >>>>>>> >>> >>> >>>>>>> >>> >>> Thanks, >>>>>>> >>> >>> Kanagaraj >>>>>>> >>> >>> >>>>>>> >>> >>> ----- Original Message ----- >>>>>>> >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >>>>>>> >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >>>>>>> >>> >>> > Cc: users@ovirt.org >>>>>>> >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >>>>>>> >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] >>>>>>> >>> >>> > failed on >>>>>>> >>> >>> > server... >>>>>>> >>> >>> > >>>>>>> >>> >>> > Hi Kanagraj, >>>>>>> >>> >>> > >>>>>>> >>> >>> > Please check the attached log files....i didn't find any >>>>>>> >>> >>> > thing >>>>>>> >>> >>> > special.... >>>>>>> >>> >>> > >>>>>>> >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj >>>>>>> >>> >>> > <kmayilsa@redhat.com> >>>>>>> >>> >>> > wrote: >>>>>>> >>> >>> > >>>>>>> >>> >>> > > Do you see any errors in >>>>>>> >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or >>>>>>> >>> >>> > > vdsm.log >>>>>>> >>> >>> > > when >>>>>>> >>> >>> > > the >>>>>>> >>> >>> > > service is trying to start automatically after
>>>>>>> >>> >>> > > reboot? >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > Thanks, >>>>>>> >>> >>> > > Kanagaraj >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > Hi Kanagaraj, >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > Yes...once i will start the gluster service and
>>>>>>> >>> >>> > > vdsmd >>>>>>> >>> >>> > > ...the >>>>>>> >>> >>> > > host >>>>>>> >>> >>> > > can connect to cluster...but the question is why it's >>>>>>> >>> >>> > > not >>>>>>> >>> >>> > > started >>>>>>> >>> >>> > > even it >>>>>>> >>> >>> > > has chkconfig enabled... >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > I have tested it in two host cluster >>>>>>> >>> >>> > > environment...(Centos 6.6 >>>>>>> >>> >>> > > and >>>>>>> >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed >>>>>>> >>> >>> > > to >>>>>>> >>> >>> > > reconnect >>>>>>> >>> >>> > > in >>>>>>> >>> >>> > > to >>>>>>> >>> >>> > > cluster after reboot.... >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > In both the environment glusterd enabled for next >>>>>>> >>> >>> > > boot....but >>>>>>> >>> >>> > > it's >>>>>>> >>> >>> > > failed with the same error....seems it's bug in either >>>>>>> >>> >>> > > gluster or >>>>>>> >>> >>> > > Ovirt ?? >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > Please help me to find the workaround here if can not >>>>>>> >>> >>> > > resolve >>>>>>> >>> >>> > > it...as >>>>>>> >>> >>> > > without this the Host machine can not connect after >>>>>>> >>> >>> > > reboot....that >>>>>>> >>> >>> > > means >>>>>>> >>> >>> > > engine will consider it as down and every time need to >>>>>>> >>> >>> > > manually >>>>>>> >>> >>> > > start >>>>>>> >>> >>> > > the >>>>>>> >>> >>> > > gluster service and vdsmd... ?? >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > Thanks, >>>>>>> >>> >>> > > Punit >>>>>>> >>> >>> > > >>>>>>> >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj >>>>>>> >>> >>> > > <kmayilsa@redhat.com> >>>>>>> >>> >>> > > wrote: >>>>>>> >>> >>> > > >>>>>>> >>> >>> > >> From vdsm.log "error: Connection failed. Please >>>>>>> >>> >>> > >> check if >>>>>>> >>> >>> > >> gluster >>>>>>> >>> >>> > >> daemon >>>>>>> >>> >>> > >> is operational." >>>>>>> >>> >>> > >> >>>>>>> >>> >>> > >> Starting glusterd service should fix this issue. >>>>>>> >>> >>> > >> 'service >>>>>>> >>> >>> > >> glusterd >>>>>>> >>> >>> > >> start' >>>>>>> >>> >>> > >> But i am wondering why the glusterd was not started >>>>>>> >>> >>> > >> automatically >>>>>>> >>> >>> > >> after >>>>>>> >>> >>> > >> the reboot. 
Thanks,
Kanagaraj

On 11/24/2014 07:18 PM, Punit Dambiwal wrote:

Hi Kanagaraj,

Please find the attached VDSM logs :-

----------------
Thread-13::DEBUG::2014-11-24 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False
Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing
Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {}
Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False
Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {}
Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None)
Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31
Thread-13::DEBUG::2014-11-24 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, ...}, 'selinux': {'mode': '1'}, 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'operatingSystem': {'release': '5.el6.centos.11.1', 'version': '6', 'name': 'RHEL'}, ...}}
Thread-13::DEBUG::2014-11-24 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getHardwareInfo with () {}
Thread-13::DEBUG::2014-11-24 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) return getHardwareInfo with {'status': {'message': 'Done', 'code': 0}, 'info': {'systemProductName': 'CS24-TY', 'systemSerialNumber': '7LWSPN1', 'systemFamily': 'Server', 'systemVersion': 'A00', 'systemUUID': '44454c4c-4c00-1057-8053-b7c04f504e31', 'systemManufacturer': 'Dell'}}
Thread-13::DEBUG::2014-11-24 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call hostsList with () {} flowID [222e8036]
Thread-13::ERROR::2014-11-24 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line 1135, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 54, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 251, in hostsList
    return {'hosts': self.svdsmProxy.glusterPeerStatus()}
  File "/usr/share/vdsm/supervdsm.py", line 50, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 48, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterPeerStatus
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
GlusterCmdExecFailedException: Command execution failed
error: Connection failed. Please check if gluster daemon is operational.
return code: 1
Thread-13::DEBUG::2014-11-24 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state init -> state preparing
Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-24 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-13::DEBUG::2014-11-24 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {}
Thread-13::DEBUG::2014-11-24 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-24 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 aborting False
-------------------------------

[root@compute4 ~]# service glusterd status
glusterd is stopped
[root@compute4 ~]# chkconfig --list | grep glusterd
glusterd       0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@compute4 ~]#

Thanks,
Punit

On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj <kmayilsa@redhat.com> wrote:

Can you send the corresponding error in vdsm.log from the host?

Also check if glusterd service is running.

Thanks,
Kanagaraj

I filed a bug against CentOS about a problem with NetworkManager-wait-online.service; this service is executed before network-online.target: http://bugs.centos.org/view.php?id=7825

I had a problem mounting a DRBD device at boot time on CentOS 7, and I think it could be similar to the problem of mounting gluster at boot time on CentOS 7. I modified the service, adding one line that waits for NetworkManager to set up the IP address on the interface used by DRBD. The IP address on eth0 is 10.88.0.1, and the line I added is:

ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'

# cat /usr/lib/systemd/system/NetworkManager-wait-online.service
[Unit]
Description=Network Manager Wait Online
Requisite=NetworkManager.service
After=NetworkManager.service
Wants=network.target
Before=network.target network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/nm-online -q --timeout=30
ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'

Alberto

2014-12-05 14:50 GMT+01:00 Kaushal M <kshlmster@gmail.com>:
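Editing the unit under /usr/lib/systemd/system works, but a package update can silently overwrite it; the same extra wait can also live in a drop-in under /etc. A minimal sketch, reusing Alberto's eth0/10.88 check (the drop-in file name is illustrative):

# /etc/systemd/system/NetworkManager-wait-online.service.d/wait-for-ip.conf
[Service]
ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'

Because the unit is Type=oneshot, the extra ExecStart from the drop-in is appended after the existing one; run "systemctl daemon-reload" afterwards so systemd picks up the change.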
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
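A minimal sketch of what the adjusted [Unit] section of glusterd.service might look like with that change. The After=network-online.target entry is my addition, not something stated in the thread: pairing Wants= with After= is the usual systemd idiom so the unit is both pulled in and ordered after the target.

[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target network-online.target rpcbind.service
Wants=network-online.target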
If even that fails, you could try adding an entry into /etc/hosts with the hostname of the system. This should prevent any more failures.
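A sketch of the kind of /etc/hosts entries meant here, using hostnames that appear elsewhere in the thread; the addresses are placeholders except 10.10.0.8, which the peer status output later in this thread lists for cpu04:

# /etc/hosts -- addresses below are illustrative
10.10.0.5   cpu05.zne01.hkg1.ovt.36stack.com  cpu05
10.10.0.8   cpu04.stack.com                   cpu04

Running "getent hosts cpu05.zne01.hkg1.ovt.36stack.com" afterwards is a quick way to confirm the entry is picked up, since it goes through roughly the same resolver path glusterd uses.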
I still don't believe it's a problem with Gluster. Gluster uses APIs provided by the system to perform name resolution. These definitely work correctly, because you can start GlusterD later. Since the resolution failure only happens during boot, it points to system or network setup issues during boot. To me it seems like the network isn't completely set up at that point in time.
~kaushal
On Fri, Dec 5, 2014 at 12:47 PM, Punit Dambiwal <hypunit@gmail.com> wrote:

Hi Kaushal,

It seems it's a bug in glusterfs 3.6.... even though I set up my systemd unit to start the network service before glusterd... it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
Thanks, Punit
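If the unit really is starting too early even with the ExecStartPre in place, the boot-time journal usually shows the ordering. A few standard systemd commands that may help narrow down when glusterd ran relative to the network units (nothing here is specific to gluster):

journalctl -b -u glusterd.service -u network.service -u NetworkManager-wait-online.service
systemd-analyze critical-chain glusterd.service
systemctl show glusterd.service -p After -p Before -p Wants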
On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user on the mailing list a while back who had a similar issue of GlusterD failing to start. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration and the problem vanished when he began using DHCP; or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing you could look at.
I'll try to find the mail thread to see if it was the same problem as you.
~kaushal
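For anyone checking whether their host matches the static-network or NetworkManager case described above, a quick way to see which of the two network services is actually active (plain systemd/NetworkManager tooling, nothing thread-specific):

systemctl is-active NetworkManager.service network.service
nmcli -t -f DEVICE,STATE device   # only meaningful when NetworkManager is the one in use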
On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com> wrote:
I don't know much about how the network target is brought up in CentOS7, but I'll try as much as I can.
It seems to me that, after the network has been brought up and by the time GlusterD is started, a. The machine hasn't yet received its hostname, or b. It hasn't yet registered with the name server.
This is causing name resolution failures.
I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a.
So it seems to be b. But that kind of registration happens only in DDNS systems, which doesn't seem to be the case for you.
Both of these reasons might be wrong (most likely wrong). You would do well to ask for help from someone with more experience in systemd + networking.
~kaushal
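One way to capture what the host actually knows at the moment glusterd starts is to log the hostname and resolver state from an ExecStartPre in a drop-in. A rough sketch; the drop-in path, log file and hostname are illustrative, so substitute the peer name that fails to resolve:

# /etc/systemd/system/glusterd.service.d/boot-debug.conf
[Service]
ExecStartPre=/bin/sh -c 'date >> /var/log/glusterd-boot-check.log; hostname -f >> /var/log/glusterd-boot-check.log; getent hosts cpu05.zne01.hkg1.ovt.36stack.com >> /var/log/glusterd-boot-check.log 2>&1 || echo "no resolution yet" >> /var/log/glusterd-boot-check.log'

The extra ExecStartPre runs before the existing start command, so the log records whether the hostname and name resolution were in place when glusterd was launched.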
On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
This is the host...which i rebooted...would you mind to let me know
how
i can make the glusterd sevice come up after network...i am using centos7...if network is the issue...
On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote:
This peer cannot be identified.
" [2014-12-03 02:29:25.998153] D [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] 0-management: Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com"
I don't know why this address is not being resolved during boot time. If this is a valid peer, then the only reason I can think of is that the network is not up.
If you had previously detached the peer forcefully, then that could have left stale entries in some volumes. In this case as well, GlusterD will fail to identify the peer.
Do either of these reasons seem a possibility to you?
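If a forcefully detached peer is the suspicion, the state glusterd restores at boot lives on disk and can be inspected directly. A rough check, assuming the default /var/lib/glusterd layout (the grep pattern is just the short host prefix used in this thread):

grep -r "cpu05" /var/lib/glusterd/peers /var/lib/glusterd/vols 2>/dev/null
cat /var/lib/glusterd/peers/*    # one file per known peer: uuid, state and hostname entries

Any brick entry under /var/lib/glusterd/vols that names an address no longer present under peers/ would explain the "resolve brick failed in restore" error.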
On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote:

Hi Kaushal,

Please find the logs here :- http://ur1.ca/iyoe5 and http://ur1.ca/iyoed

On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> wrote:

Hey Punit,
In the logs you've provided, GlusterD appears to be running correctly. Could you provide the logs for the time period when GlusterD attempts to start but fails?

~kaushal

On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote:

Hi Kaushal,

Please find the logs here :- http://ur1.ca/iyhs5 and http://ur1.ca/iyhue

Thanks,
punit

On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com> wrote:

Hey Punit,
Could you start Glusterd in debug mode and provide the logs here? To start it in debug mode, append '-LDEBUG' to the ExecStart line in the service file.

~kaushal

On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal <hypunit@gmail.com> wrote:

Hi,

Can anybody help me on this ??

On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal <hypunit@gmail.com> wrote:

Hi Kaushal,

Thanks for the detailed reply....let me explain my setup first :-

1. Ovirt Engine
2. 4* host as well as storage machine (Host and gluster combined)
3. Every host has 24 bricks...

Now whenever the host machine reboots, it can come up but cannot join the cluster again, and throws the following error: "Gluster command [<UNKNOWN>] failed on server.."

Please check my comments in line :-

1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names.
---------------
[root@cpu05 ~]# gluster peer status
Number of Peers: 3

Hostname: cpu03.stack.com
Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb
State: Peer in Cluster (Connected)

Hostname: cpu04.stack.com
Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0
State: Peer in Cluster (Connected)
Other names:
10.10.0.8

Hostname: cpu02.stack.com
Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25
State: Peer in Cluster (Connected)
[root@cpu05 ~]#
----------------
2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.
>> I think the service itself is doing the same job....

[root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
[root@cpu05 ~]#
--------------------

gluster logs :-

[2014-11-24 09:22:22.147471] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.6.1 (args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] 0-management: Maximum allowed open file descriptors set to 65536
[2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] 0-management: Using /var/lib/glusterd as working directory
[2014-11-24 09:22:22.155216] W [rdma.c:4195:__gf_rdma_ctx_create] 0-rpc-transport/rdma: rdma_cm event channel creation failed (No such device)
[2014-11-24 09:22:22.155264] E [rdma.c:4483:init] 0-rdma.management: Failed to initialize IB Device
[2014-11-24 09:22:22.155285] E [rpc-transport.c:333:rpc_transport_load] 0-rpc-transport: 'rdma' initialization failed
[2014-11-24 09:22:22.155354] W [rpcsvc.c:1524:rpcsvc_transport_create] 0-rpc-service: cannot create listener, initing the transport failed
[2014-11-24 09:22:22.156290] I [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: geo-replication module not installed in the system
[2014-11-24 09:22:22.161318] I [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: retrieved op-version: 30600
[2014-11-24 09:22:22.821800] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.825810] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828705] I [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] 0-management: connect returned 0
[2014-11-24 09:22:22.828771] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.832670] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.835919] I [rpc-clnt.c:969:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2014-11-24 09:22:22.840209] E [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore
[2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
[2014-11-24 09:22:22.840245] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
[2014-11-24 09:22:22.840264] E [graph.c:525:glusterfs_graph_activate] 0-graph: init failed
[2014-11-24 09:22:22.840754] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (0), shutting down

Thanks,
Punit

On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M <kshlmster@gmail.com> wrote:

Based on the logs I can guess that glusterd is being started before the network has come up, and that the addresses given to bricks do not directly match the addresses used during peer probe.

The gluster_after_reboot log has the line "[2014-11-25 06:46:09.972113] E [glusterd-store.c:2632:glusterd_resolve_all_bricks] 0-glusterd: resolve brick failed in restore".

Brick resolution fails when glusterd cannot match the address for the brick with one of the peers. Brick resolution happens in two phases:
1. We first try to identify the peer by performing string comparisons with the brick address and the peer addresses (the peer names will be the names/addresses that were given when the peer was probed).
2. If we don't find a match from step 1, we will then resolve all the brick addresses and the peer addresses into addrinfo structs, and then compare these structs to find a match. This process should generally find a match if available. This will fail only if the network is not up yet, as we cannot resolve addresses.

The above steps are applicable only to glusterfs versions >= 3.6. They were introduced to reduce problems with peer identification, like the one you encountered.

Since both of the steps failed to find a match in one run, but succeeded later, we can come to the conclusion that,
a) the bricks don't have the exact same string used in peer probe for their addresses, as step 1 failed, and
b) the network was not up in the initial run, as step 2 failed during the initial run, but passed in the second run.

Please let me know if my conclusion is correct.

If it is, you can solve your problem in two ways.
1. Use the same string for doing the peer probe and for the brick address during volume create/add-brick. Ideally, we suggest you use properly resolvable FQDNs everywhere. If that is not possible, then use only IP addresses. Try to avoid short names.
2. During boot up, make sure to launch glusterd only after the network is up. This will allow the new peer identification mechanism to do its job correctly.

If you have already followed these steps and yet still hit the problem, then please provide more information (setup, logs, etc.). It could be a much different problem that you are facing.

~kaushal

On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal <hypunit@gmail.com> wrote:

Is there anyone who can help on this ??

Thanks,
punit

On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote:

Hi,

My Glusterfs version is :- glusterfs-3.6.1-1.el7

On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:

[+Gluster-users@gluster.org]

"Initialization of volume 'management' failed, review your volfile again" -- glusterd throws this error when the service is started automatically after the reboot. But the service is successfully started later manually by the user.

Can somebody from gluster-users please help on this?

glusterfs version: 3.5.1

Thanks,
Kanagaraj

----- Original Message -----
From: "Punit Dambiwal" <hypunit@gmail.com>
To: "Kanagaraj" <kmayilsa@redhat.com>
Cc: users@ovirt.org
Sent: Tuesday, November 25, 2014 7:24:45 PM
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...

Hi Kanagaraj,

Please check the attached log files....I didn't find anything special....

On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> wrote:

Do you see any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the service is trying to start automatically after the reboot?

Thanks,
Kanagaraj

On 11/24/2014 08:13 PM, Punit Dambiwal wrote:

Hi Kanagaraj,

Yes...once I start the gluster service and then vdsmd...the host can connect to the cluster...but the question is why it's not started even though it has chkconfig enabled...

I have tested it in a two host cluster environment (CentOS 6.6 and CentOS 7.0); on both hypervisor clusters it fails to reconnect to the cluster after reboot....

In both environments glusterd is enabled for the next boot....but it fails with the same error....seems it's a bug in either gluster or oVirt ??

Please help me to find a workaround here if it can not be resolved...as without this the host machine can not connect after reboot....that means the engine will consider it as down, and every time I need to manually start the gluster service and vdsmd... ??

Thanks,
Punit

On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:

From vdsm.log "error: Connection failed. Please check if gluster daemon is operational."

Starting glusterd service should fix this issue. 'service glusterd start'
But I am wondering why the glusterd was not started automatically after the reboot.

Thanks,
Kanagaraj
0.0.0.0/0 table all (cwd None) >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) >>>> >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-object',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-plugin',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-account',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-proxy',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-doc',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('gluster-swift-container',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>> >>> >>> > >> package >>>> >>> >>> > >> ('glusterfs-geo-replication',) not found >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,600::caps::646::root::(get) >>>> >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 >>>> >>> >>> > >> required >>>> >>> >>> > >> >= >>>> >>> >>> > >> 0.10.2-31 >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) >>>> >>> >>> > >> return >>>> >>> >>> > >> getCapabilities >>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>> >>> >>> > >> 'info': >>>> >>> >>> > >> {'HBAInventory': >>>> >>> >>> > >> {'iSCSI': [{'InitiatorName': >>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >>>> >>> >>> > >> 'FC': >>>> >>> >>> > >> []}, 'packages2': {'kernel': {'release': >>>> >>> >>> > >> '431.el6.x86_64', >>>> >>> >>> > >> 'buildtime': >>>> >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >>>> >>> >>> > >> {'release': >>>> >>> >>> > >> '1.el6', >>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, >>>> >>> >>> > >> 'glusterfs-fuse': >>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>> >>> >>> > >> 'version': >>>> >>> >>> > >> '3.5.1'}, >>>> >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': >>>> >>> >>> > >> 1402324637L, >>>> >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': >>>> >>> >>> > >> '1.gitdb83943.el6', >>>> >>> >>> > >> 'buildtime': >>>> >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': >>>> >>> >>> > >> {'release': >>>> >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, >>>> >>> >>> > >> 'version': >>>> >>> >>> > >> '0.12.1.2'}, >>>> >>> >>> > >> 'qemu-img': {'release': '2.415.el6_5.10', >>>> >>> >>> > >> 'buildtime': >>>> >>> >>> > >> 1402435700L, >>>> >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': 
{'release': >>>> >>> >>> > >> '29.el6_5.9', >>>> >>> >>> > >> 'buildtime': >>>> >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': >>>> >>> >>> > >> {'release': >>>> >>> >>> > >> '1.el6', >>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >>>> >>> >>> > >> {'release': >>>> >>> >>> > >> '2.el6', >>>> >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >>>> >>> >>> > >> 'glusterfs-server': >>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>> >>> >>> > >> 'version': >>>> >>> >>> > >> '3.5.1'}}, >>>> >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, >>>> >>> >>> > >> 'cpuModel': >>>> >>> >>> > >> 'Intel(R) >>>> >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': >>>> >>> >>> > >> 'false', >>>> >>> >>> > >> 'hooks': >>>> >>> >>> > >> {}, >>>> >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': >>>> >>> >>> > >> {'mode': '1'}, >>>> >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', >>>> >>> >>> > >> '2.3'], >>>> >>> >>> > >> 'networks': >>>> >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': >>>> >>> >>> > >> '43.252.176.16', >>>> >>> >>> > >> 'bridged': >>>> >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', >>>> >>> >>> > >> 'ipv4addrs': >>>> >>> >>> > >> [' >>>> >>> >>> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], >>>> >>> >>> > >> 'interface': >>>> >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': >>>> >>> >>> > >> '43.25.17.1'}, >>>> >>> >>> > >> 'Internal': >>>> >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'BOOTPROTO': >>>> >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': >>>> >>> >>> > >> 'Bridge', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs': >>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>> >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', >>>> >>> >>> > >> 'stp': >>>> >>> >>> > >> 'off', >>>> >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', >>>> >>> >>> > >> 'ports': >>>> >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', >>>> >>> >>> > >> 'addr': >>>> >>> >>> > >> '10.10.10.6', >>>> >>> >>> > >> 'bridged': False, 'ipv6addrs': >>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask': >>>> >>> >>> > >> '255.255.255.0', >>>> >>> >>> > >> 'ipv4addrs': [' >>>> >>> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], >>>> >>> >>> > >> 'interface': >>>> >>> >>> > >> u'bond1', >>>> >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': >>>> >>> >>> > >> {'iface': >>>> >>> >>> > >> 'VMNetwork', >>>> >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>> >>> >>> > >> 'none', >>>> >>> >>> > >> 'STP': >>>> >>> >>> > >> 'off', >>>> >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'bridged': >>>> >>> >>> > >> True, >>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'gateway': >>>> >>> >>> > >> '', >>>> >>> >>> > >> 'bootproto4': >>>> >>> >>> > 
>> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, >>>> >>> >>> > >> 'bridges': >>>> >>> >>> > >> {'Internal': >>>> >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>> >>> >>> > >> 'none', >>>> >>> >>> > >> 'STP': >>>> >>> >>> > >> 'off', >>>> >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'ipv6addrs': >>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', >>>> >>> >>> > >> 'netmask': '', >>>> >>> >>> > >> 'stp': >>>> >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', >>>> >>> >>> > >> 'gateway': >>>> >>> >>> > >> '', >>>> >>> >>> > >> 'opts': >>>> >>> >>> > >> {'topology_change_detected': '0', >>>> >>> >>> > >> 'multicast_last_member_count': >>>> >>> >>> > >> '2', >>>> >>> >>> > >> 'hash_elasticity': '4', >>>> >>> >>> > >> 'multicast_query_response_interval': >>>> >>> >>> > >> '999', >>>> >>> >>> > >> 'multicast_snooping': '1', >>>> >>> >>> > >> 'multicast_startup_query_interval': >>>> >>> >>> > >> '3124', >>>> >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': >>>> >>> >>> > >> '25496', >>>> >>> >>> > >> 'max_age': >>>> >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', >>>> >>> >>> > >> 'root_id': >>>> >>> >>> > >> '8000.001018cddaac', 'priority': '32768', >>>> >>> >>> > >> 'multicast_membership_interval': >>>> >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >>>> >>> >>> > >> 'multicast_querier': >>>> >>> >>> > >> '0', >>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>> >>> >>> > >> '199', >>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>> >>> >>> > >> '8000.001018cddaac', >>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>> >>> >>> > >> 'gc_timer': >>>> >>> >>> > >> '31', >>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>> >>> >>> > >> 'multicast_last_member_interval': >>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>> >>> >>> > >> 'ports': >>>> >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': >>>> >>> >>> > >> {'DEFROUTE': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', >>>> >>> >>> > >> 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': >>>> >>> >>> > >> 'VMNetwork', >>>> >>> >>> > >> 'TYPE': >>>> >>> >>> > >> 'Bridge', >>>> >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': >>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>> >>> >>> > >> 'ipv6gateway': >>>> >>> >>> > >> '::', >>>> >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': >>>> >>> >>> > >> '0', >>>> >>> >>> > >> 'multicast_last_member_count': '2', >>>> >>> >>> > >> 'hash_elasticity': >>>> >>> >>> > >> '4', >>>> >>> >>> > >> 'multicast_query_response_interval': '999', >>>> >>> >>> > >> 'multicast_snooping': >>>> >>> >>> > >> '1', >>>> >>> >>> > >> 'multicast_startup_query_interval': '3124', >>>> >>> >>> > >> 'hello_timer': >>>> >>> >>> > >> '131', >>>> >>> >>> > >> 'multicast_querier_interval': '25496', 'max_age': >>>> >>> >>> > >> '1999', >>>> >>> >>> > >> 'hash_max': >>>> 
>>> >>> > >> '512', 'stp_state': '0', 'root_id': >>>> >>> >>> > >> '8000.60eb6920b46c', >>>> >>> >>> > >> 'priority': >>>> >>> >>> > >> '32768', 'multicast_membership_interval': '25996', >>>> >>> >>> > >> 'root_path_cost': >>>> >>> >>> > >> '0', >>>> >>> >>> > >> 'root_port': '0', 'multicast_querier': '0', >>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>> >>> >>> > >> '199', >>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>> >>> >>> > >> '8000.60eb6920b46c', >>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>> >>> >>> > >> 'gc_timer': >>>> >>> >>> > >> '31', >>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>> >>> >>> > >> 'multicast_last_member_interval': >>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>> >>> >>> > >> 'ports': >>>> >>> >>> > >> ['bond0.36']}}, 'uuid': >>>> >>> >>> > >> '44454C4C-4C00-1057-8053-B7C04F504E31', >>>> >>> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': >>>> >>> >>> > >> {'permhwaddr': >>>> >>> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': >>>> >>> >>> > >> 'yes', >>>> >>> >>> > >> 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', >>>> >>> >>> > >> 'MASTER': >>>> >>> >>> > >> 'bond1', >>>> >>> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>> >>> >>> > >> 'speed': >>>> >>> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', >>>> >>> >>> > >> 'addr': '', >>>> >>> >>> > >> 'cfg': >>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'HWADDR': >>>> >>> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': >>>> >>> >>> > >> 'eth2', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', >>>> >>> >>> > >> 'ipv4addrs': [], >>>> >>> >>> > >> 'hwaddr': >>>> >>> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': >>>> >>> >>> > >> {'permhwaddr': >>>> >>> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': >>>> >>> >>> > >> 'yes', >>>> >>> >>> > >> 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', >>>> >>> >>> > >> 'MASTER': >>>> >>> >>> > >> 'bond0', >>>> >>> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>> >>> >>> > >> '60:eb:69:20:b4:6c', >>>> >>> >>> > >> 'speed': >>>> >>> >>> > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', >>>> >>> >>> > >> 'addr': '', >>>> >>> >>> > >> 'cfg': >>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'HWADDR': >>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': >>>> >>> >>> > >> 'eth0', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'yes'}, >>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', >>>> >>> >>> > >> 'ipv4addrs': [], >>>> >>> >>> > >> 'hwaddr': >>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, >>>> >>> >>> > >> 'software_revision': '1', >>>> >>> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', >>>> >>> >>> > >> '3.5'], >>>> >>> >>> > >> 'cpuFlags': >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 
u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >>>> >>> >>> > >> 'ISCSIInitiatorName': >>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8', >>>> >>> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs': >>>> >>> >>> > >> ['3.0', >>>> >>> >>> > >> '3.1', >>>> >>> >>> > >> '3.2', >>>> >>> >>> > >> '3.3', >>>> >>> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': >>>> >>> >>> > >> '321', >>>> >>> >>> > >> 'bondings': >>>> >>> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', >>>> >>> >>> > >> 'netmask': '', >>>> >>> >>> > >> 'slaves': >>>> >>> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': >>>> >>> >>> > >> '', >>>> >>> >>> > >> 'cfg': >>>> >>> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'BONDING_OPTS': >>>> >>> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': >>>> >>> >>> > >> 'yes'}, >>>> >>> >>> > >> 'ipv6addrs': >>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', >>>> >>> >>> > >> 'netmask': '', >>>> >>> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', >>>> >>> >>> > >> 'slaves': >>>> >>> >>> > >> ['eth0', >>>> >>> >>> > >> 'eth1'], >>>> >>> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': >>>> >>> >>> > >> {'addr': >>>> >>> >>> > >> '10.10.10.6', >>>> >>> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': >>>> >>> >>> > >> '255.255.255.0', >>>> >>> >>> > >> 'BOOTPROTO': >>>> >>> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', >>>> >>> >>> > >> 'DEVICE': >>>> >>> >>> > >> 'bond1', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>>> >>> >>> > >> ['10.10.10.6/24' >>>> >>> >>> > >> <http://10.10.10.6/24%27>], 'hwaddr': >>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>> >>> >>> > >> 'slaves': >>>> >>> >>> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': >>>> >>> >>> > >> '4'}}, >>>> >>> >>> > >> 'bond2': >>>> >>> >>> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', >>>> >>> >>> > >> 'slaves': >>>> >>> >>> > >> [], >>>> >>> >>> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', >>>> >>> >>> > >> 'cfg': {}, >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': >>>> >>> >>> > >> '00:00:00:00:00:00'}}, >>>> >>> >>> > >> 'software_version': '4.16', 'memSize': '24019', >>>> >>> >>> > >> 'cpuSpeed': >>>> >>> >>> > >> '2667.000', >>>> >>> >>> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': >>>> >>> >>> > >> [6, >>>> >>> >>> > >> 7, 8, >>>> >>> >>> > >> 9, >>>> >>> >>> > >> 10, 11, >>>> >>> >>> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': >>>> >>> >>> > >> '12278', >>>> >>> >>> > >> 'cpus': >>>> >>> >>> > >> [0, >>>> >>> >>> > >> 1, 2, >>>> >>> >>> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': >>>> >>> >>> > >> 'Snow >>>> >>> >>> > >> Man', >>>> >>> >>> > >> 
'vlans': >>>> >>> >>> > >> {'bond0.10': {'iface': 'bond0', 'addr': >>>> >>> >>> > >> '43.25.17.16', >>>> >>> >>> > >> 'cfg': >>>> >>> >>> > >> {'DEFROUTE': >>>> >>> >>> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'GATEWAY': >>>> >>> >>> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': >>>> >>> >>> > >> '255.255.255.0', >>>> >>> >>> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'yes'}, >>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'vlanid': >>>> >>> >>> > >> 10, >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>>> >>> >>> > >> ['43.25.17.16/24'] >>>> >>> >>> > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': >>>> >>> >>> > >> 'bond0', >>>> >>> >>> > >> 'addr': >>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>> >>> >>> > >> 'vlanid': >>>> >>> >>> > >> 36, >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '1500', >>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': >>>> >>> >>> > >> {'iface': >>>> >>> >>> > >> 'bond1', >>>> >>> >>> > >> 'addr': >>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', >>>> >>> >>> > >> 'HOTPLUG': >>>> >>> >>> > >> 'no', >>>> >>> >>> > >> 'MTU': >>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', >>>> >>> >>> > >> 'ONBOOT': >>>> >>> >>> > >> 'no'}, >>>> >>> >>> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>>> >>> >>> > >> 'vlanid': >>>> >>> >>> > >> 100, >>>> >>> >>> > >> 'mtu': >>>> >>> >>> > >> '9000', >>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', >>>> >>> >>> > >> 'kvmEnabled': >>>> >>> >>> > >> 'true', >>>> >>> >>> > >> 'guestOverhead': '65', 'cpuThreads': '24', >>>> >>> >>> > >> 'emulatedMachines': >>>> >>> >>> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', >>>> >>> >>> > >> u'rhel6.2.0', >>>> >>> >>> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', >>>> >>> >>> > >> u'rhel5.4.4', >>>> >>> >>> > >> u'rhel5.4.0'], >>>> >>> >>> > >> 'operatingSystem': {'release': '5.el6.centos.11.1', >>>> >>> >>> > >> 'version': >>>> >>> >>> > >> '6', >>>> >>> >>> > >> 'name': >>>> >>> >>> > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) >>>> >>> >>> > >> client >>>> >>> >>> > >> [10.10.10.2]::call >>>> >>> >>> > >> getHardwareInfo with () {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) >>>> >>> >>> > >> return >>>> >>> >>> > >> getHardwareInfo >>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>> >>> >>> > >> 'info': >>>> >>> >>> > >> {'systemProductName': 'CS24-TY', >>>> >>> >>> > >> 'systemSerialNumber': >>>> >>> >>> > >> '7LWSPN1', >>>> >>> >>> > >> 'systemFamily': 'Server', 'systemVersion': 'A00', >>>> >>> >>> > >> 'systemUUID': >>>> >>> >>> > >> '44454c4c-4c00-1057-8053-b7c04f504e31', >>>> >>> >>> > >> 'systemManufacturer': >>>> >>> >>> > >> 'Dell'}} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> 
21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) >>>> >>> >>> > >> client >>>> >>> >>> > >> [10.10.10.2]::call >>>> >>> >>> > >> hostsList with () {} flowID [222e8036] >>>> >>> >>> > >> Thread-13::ERROR::2014-11-24 >>>> >>> >>> > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) >>>> >>> >>> > >> vdsm >>>> >>> >>> > >> exception >>>> >>> >>> > >> occured >>>> >>> >>> > >> Traceback (most recent call last): >>>> >>> >>> > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py", line >>>> >>> >>> > >> 1135, >>>> >>> >>> > >> in >>>> >>> >>> > >> wrapper >>>> >>> >>> > >> res = f(*args, **kwargs) >>>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 54, in >>>> >>> >>> > >> wrapper >>>> >>> >>> > >> rv = func(*args, **kwargs) >>>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 251, in >>>> >>> >>> > >> hostsList >>>> >>> >>> > >> return {'hosts': >>>> >>> >>> > >> self.svdsmProxy.glusterPeerStatus()} >>>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 50, in >>>> >>> >>> > >> __call__ >>>> >>> >>> > >> return callMethod() >>>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 48, in >>>> >>> >>> > >> <lambda> >>>> >>> >>> > >> **kwargs) >>>> >>> >>> > >> File "<string>", line 2, in glusterPeerStatus >>>> >>> >>> > >> File >>>> >>> >>> > >> "/usr/lib64/python2.6/multiprocessing/managers.py", >>>> >>> >>> > >> line >>>> >>> >>> > >> 740, >>>> >>> >>> > >> in >>>> >>> >>> > >> _callmethod >>>> >>> >>> > >> raise convert_to_error(kind, result) >>>> >>> >>> > >> GlusterCmdExecFailedException: Command execution >>>> >>> >>> > >> failed >>>> >>> >>> > >> error: Connection failed. Please check if gluster >>>> >>> >>> > >> daemon >>>> >>> >>> > >> is >>>> >>> >>> > >> operational. >>>> >>> >>> > >> return code: 1 >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) >>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving >>>> >>> >>> > >> from >>>> >>> >>> > >> state >>>> >>> >>> > >> init >>>> >>> >>> > >> -> >>>> >>> >>> > >> state preparing >>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>> >>> >>> > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run >>>> >>> >>> > >> and >>>> >>> >>> > >> protect: >>>> >>> >>> > >> repoStats(options=None) >>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>> >>> >>> > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run >>>> >>> >>> > >> and >>>> >>> >>> > >> protect: >>>> >>> >>> > >> repoStats, Return response: {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) >>>> >>> >>> > >> >>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) >>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving >>>> >>> >>> > >> from >>>> >>> >>> > >> state >>>> >>> >>> > >> preparing >>>> >>> >>> > >> -> >>>> >>> >>> > >> state finished >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> 
>>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>> >>> >>> > >> Owner.cancelAll requests {} >>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>> >>> >>> > >> >>>> >>> >>> > >> >>>> >>> >>> > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) >>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 >>>> >>> >>> > >> aborting >>>> >>> >>> > >> False >>>> >>> >>> > >> ------------------------------- >>>> >>> >>> > >> >>>> >>> >>> > >> [root@compute4 ~]# service glusterd status >>>> >>> >>> > >> glusterd is stopped >>>> >>> >>> > >> [root@compute4 ~]# chkconfig --list | grep glusterd >>>> >>> >>> > >> glusterd 0:off 1:off 2:on 3:on 4:on >>>> >>> >>> > >> 5:on >>>> >>> >>> > >> 6:off >>>> >>> >>> > >> [root@compute4 ~]# >>>> >>> >>> > >> >>>> >>> >>> > >> Thanks, >>>> >>> >>> > >> Punit >>>> >>> >>> > >> >>>> >>> >>> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj >>>> >>> >>> > >> <kmayilsa@redhat.com> >>>> >>> >>> > >> wrote: >>>> >>> >>> > >> >>>> >>> >>> > >>> Can you send the corresponding error in vdsm.log >>>> >>> >>> > >>> from >>>> >>> >>> > >>> the >>>> >>> >>> > >>> host? >>>> >>> >>> > >>> >>>> >>> >>> > >>> Also check if glusterd service is running. >>>> >>> >>> > >>> >>>> >>> >>> > >>> Thanks, >>>> >>> >>> > >>> Kanagaraj >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote: >>>> >>> >>> > >>> >>>> >>> >>> > >>> Hi, >>>> >>> >>> > >>> >>>> >>> >>> > >>> After reboot my Hypervisior host can not activate >>>> >>> >>> > >>> again >>>> >>> >>> > >>> in the >>>> >>> >>> > >>> cluster >>>> >>> >>> > >>> and failed with the following error :- >>>> >>> >>> > >>> >>>> >>> >>> > >>> Gluster command [<UNKNOWN>] failed on server... 
>>>> >>> >>> > >>> >>>> >>> >>> > >>> Engine logs :- >>>> >>> >>> > >>> >>>> >>> >>> > >>> 2014-11-24 18:05:28,397 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-64) START, >>>> >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, >>>> >>> >>> > >>> HostId >>>> >>> >>> > >>> = >>>> >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: >>>> >>> >>> > >>> 5f251c90 >>>> >>> >>> > >>> 2014-11-24 18:05:30,609 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-64) FINISH, >>>> >>> >>> > >>> GlusterVolumesListVDSCommand, >>>> >>> >>> > >>> return: >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0 }, >>>> >>> >>> > >>> log id: 5f251c90 >>>> >>> >>> > >>> 2014-11-24 18:05:33,768 INFO >>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>> >>> >>> > >>> (ajp--127.0.0.1-8702-8) >>>> >>> >>> > >>> [287d570d] Lock Acquired to object EngineLock >>>> >>> >>> > >>> [exclusiveLocks= >>>> >>> >>> > >>> key: >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS >>>> >>> >>> > >>> , sharedLocks= ] >>>> >>> >>> > >>> 2014-11-24 18:05:33,795 INFO >>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>> >>> >>> > >>> Running >>>> >>> >>> > >>> command: >>>> >>> >>> > >>> ActivateVdsCommand internal: false. 
Entities >>>> >>> >>> > >>> affected : >>>> >>> >>> > >>> ID: >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction >>>> >>> >>> > >>> group >>>> >>> >>> > >>> MANIPULATE_HOST >>>> >>> >>> > >>> with role type ADMIN >>>> >>> >>> > >>> 2014-11-24 18:05:33,796 INFO >>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>> >>> >>> > >>> Before >>>> >>> >>> > >>> acquiring >>>> >>> >>> > >>> lock in >>>> >>> >>> > >>> order to prevent monitoring for host Compute5 from >>>> >>> >>> > >>> data-center >>>> >>> >>> > >>> SV_WTC >>>> >>> >>> > >>> 2014-11-24 18:05:33,797 INFO >>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock >>>> >>> >>> > >>> acquired, >>>> >>> >>> > >>> from >>>> >>> >>> > >>> now a >>>> >>> >>> > >>> monitoring of host will be skipped for host Compute5 >>>> >>> >>> > >>> from >>>> >>> >>> > >>> data-center >>>> >>> >>> > >>> SV_WTC >>>> >>> >>> > >>> 2014-11-24 18:05:33,817 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>> >>> >>> > >>> START, >>>> >>> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>>> >>> >>> > >>> status=Unassigned, >>>> >>> >>> > >>> nonOperationalReason=NONE, >>>> >>> >>> > >>> stopSpmFailureLogged=false), >>>> >>> >>> > >>> log id: >>>> >>> >>> > >>> 1cbc7311 >>>> >>> >>> > >>> 2014-11-24 18:05:33,820 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>> >>> >>> > >>> FINISH, >>>> >>> >>> > >>> SetVdsStatusVDSCommand, log id: 1cbc7311 >>>> >>> >>> > >>> 2014-11-24 18:05:34,086 INFO >>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Activate >>>> >>> >>> > >>> finished. >>>> >>> >>> > >>> Lock >>>> >>> >>> > >>> released. >>>> >>> >>> > >>> Monitoring can run now for host Compute5 from >>>> >>> >>> > >>> data-center >>>> >>> >>> > >>> SV_WTC >>>> >>> >>> > >>> 2014-11-24 18:05:34,088 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Correlation ID: >>>> >>> >>> > >>> 287d570d, >>>> >>> >>> > >>> Job >>>> >>> >>> > >>> ID: >>>> >>> >>> > >>> 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: >>>> >>> >>> > >>> null, >>>> >>> >>> > >>> Custom >>>> >>> >>> > >>> Event ID: >>>> >>> >>> > >>> -1, Message: Host Compute5 was activated by admin. 
>>>> >>> >>> > >>> 2014-11-24 18:05:34,090 INFO >>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Lock freed to >>>> >>> >>> > >>> object >>>> >>> >>> > >>> EngineLock >>>> >>> >>> > >>> [exclusiveLocks= key: >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a >>>> >>> >>> > >>> value: >>>> >>> >>> > >>> VDS >>>> >>> >>> > >>> , sharedLocks= ] >>>> >>> >>> > >>> 2014-11-24 18:05:35,792 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] START, >>>> >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, >>>> >>> >>> > >>> HostId >>>> >>> >>> > >>> = >>>> >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: >>>> >>> >>> > >>> 48a0c832 >>>> >>> >>> > >>> 2014-11-24 18:05:37,064 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) START, >>>> >>> >>> > >>> GetHardwareInfoVDSCommand(HostName = Compute5, >>>> >>> >>> > >>> HostId = >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log >>>> >>> >>> > >>> id: >>>> >>> >>> > >>> 6d560cc2 >>>> >>> >>> > >>> 2014-11-24 18:05:37,074 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) FINISH, >>>> >>> >>> > >>> GetHardwareInfoVDSCommand, log >>>> >>> >>> > >>> id: 6d560cc2 >>>> >>> >>> > >>> 2014-11-24 18:05:37,093 WARN >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsManager] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) Host Compute5 is >>>> >>> >>> > >>> running >>>> >>> >>> > >>> with >>>> >>> >>> > >>> disabled >>>> >>> >>> > >>> SELinux. >>>> >>> >>> > >>> 2014-11-24 18:05:37,127 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] >>>> >>> >>> > >>> Running >>>> >>> >>> > >>> command: >>>> >>> >>> > >>> HandleVdsCpuFlagsOrClusterChangedCommand internal: >>>> >>> >>> > >>> true. 
>>>> >>> >>> > >>> Entities >>>> >>> >>> > >>> affected >>>> >>> >>> > >>> : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: >>>> >>> >>> > >>> VDS >>>> >>> >>> > >>> 2014-11-24 18:05:37,147 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, >>>> >>> >>> > >>> GlusterServersListVDSCommand(HostName = Compute5, >>>> >>> >>> > >>> HostId >>>> >>> >>> > >>> = >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: >>>> >>> >>> > >>> 4faed87 >>>> >>> >>> > >>> 2014-11-24 18:05:37,164 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] >>>> >>> >>> > >>> FINISH, >>>> >>> >>> > >>> GlusterServersListVDSCommand, log id: 4faed87 >>>> >>> >>> > >>> 2014-11-24 18:05:37,189 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>> >>> >>> > >>> Running >>>> >>> >>> > >>> command: >>>> >>> >>> > >>> SetNonOperationalVdsCommand internal: true. Entities >>>> >>> >>> > >>> affected : >>>> >>> >>> > >>> ID: >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >>>> >>> >>> > >>> 2014-11-24 18:05:37,206 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, >>>> >>> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>>> >>> >>> > >>> status=NonOperational, >>>> >>> >>> > >>> nonOperationalReason=GLUSTER_COMMAND_FAILED, >>>> >>> >>> > >>> stopSpmFailureLogged=false), >>>> >>> >>> > >>> log id: fed5617 >>>> >>> >>> > >>> 2014-11-24 18:05:37,209 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>> >>> >>> > >>> FINISH, >>>> >>> >>> > >>> SetVdsStatusVDSCommand, log id: fed5617 >>>> >>> >>> > >>> 2014-11-24 18:05:37,223 ERROR >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>> >>> >>> > >>> Correlation ID: >>>> >>> >>> > >>> 4a84c4e5, >>>> >>> >>> > >>> Job >>>> >>> >>> > >>> ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call >>>> >>> >>> > >>> Stack: >>>> >>> >>> > >>> null, >>>> >>> >>> > >>> Custom >>>> >>> >>> > >>> Event >>>> >>> >>> > >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed >>>> >>> >>> > >>> on >>>> >>> >>> > >>> server >>>> >>> >>> > >>> Compute5. 
>>>> >>> >>> > >>> 2014-11-24 18:05:37,243 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>> >>> >>> > >>> Correlation ID: >>>> >>> >>> > >>> null, >>>> >>> >>> > >>> Call >>>> >>> >>> > >>> Stack: null, Custom Event ID: -1, Message: Status of >>>> >>> >>> > >>> host >>>> >>> >>> > >>> Compute5 >>>> >>> >>> > >>> was >>>> >>> >>> > >>> set >>>> >>> >>> > >>> to NonOperational. >>>> >>> >>> > >>> 2014-11-24 18:05:37,272 INFO >>>> >>> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsVersionCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running >>>> >>> >>> > >>> command: >>>> >>> >>> > >>> HandleVdsVersionCommand internal: true. Entities >>>> >>> >>> > >>> affected : >>>> >>> >>> > >>> ID: >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >>>> >>> >>> > >>> 2014-11-24 18:05:37,274 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host >>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is >>>> >>> >>> > >>> already in >>>> >>> >>> > >>> NonOperational status for reason >>>> >>> >>> > >>> GLUSTER_COMMAND_FAILED. >>>> >>> >>> > >>> SetNonOperationalVds command is skipped. >>>> >>> >>> > >>> 2014-11-24 18:05:38,065 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] >>>> >>> >>> > >>> FINISH, >>>> >>> >>> > >>> GlusterVolumesListVDSCommand, return: >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1 }, >>>> >>> >>> > >>> log id: 48a0c832 >>>> >>> >>> > >>> 2014-11-24 18:05:43,243 INFO >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-35) START, >>>> >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, >>>> >>> >>> > >>> HostId >>>> >>> >>> > >>> = >>>> >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: >>>> >>> >>> > >>> 3ce13ebc >>>> >>> >>> > >>> ^C >>>> >>> >>> > >>> [root@ccr01 ~]# >>>> >>> >>> > >>> >>>> >>> >>> > >>> Thanks, >>>> >>> >>> > >>> Punit >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> _______________________________________________ >>>> >>> >>> > >>> Users mailing >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> >>>> >>> >>> > >>> listUsers@ovirt.orghttp://
lists.ovirt.org/mailman/listinfo/users

Hi Alberto,

I am using bonding (eth2+eth3=bond0) for the gluster communication, and I am not using NetworkManager.service, as all the interfaces are configured by oVirt itself. Please let me know how I can achieve this.

[root@cpu04 ~]# systemctl list-unit-files | grep Network
NetworkManager-dispatcher.service disabled
NetworkManager-wait-online.service disabled
NetworkManager.service disabled

[root@cpu04 ~]# cat /usr/lib/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
[root@cpu04 ~]#

Thanks,
Punit

On Wed, Dec 10, 2014 at 6:21 AM, Alberto Larraz <alarraz@escoladeltreball.org> wrote:
I posted a bug against CentOS about a problem with NetworkManager-wait-online.service; this service is executed before network-online.target:
http://bugs.centos.org/view.php?id=7825
I had a problem mounting DRBD on CentOS 7 at boot time. I think it could be similar to the problem of mounting gluster at boot time on CentOS 7.
I modified the service, adding one line that waits for NetworkManager to set up the IP address on the interface used to connect the DRBD. The IP address on eth0 is 10.88.0.1, and the line I added is:
ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'
# cat /usr/lib/systemd/system/NetworkManager-wait-online.service
[Unit]
Description=Network Manager Wait Online
Requisite=NetworkManager.service
After=NetworkManager.service
Wants=network.target
Before=network.target network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/nm-online -q --timeout=30
ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'
Alberto
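For reference, the same wait-loop idea could be adapted to Punit's setup as a systemd drop-in for glusterd rather than by editing NetworkManager-wait-online.service (which is disabled on his hosts anyway). This is only a sketch: the drop-in file name is arbitrary, the interface name bond0 comes from Punit's description of his gluster network, and 10.10.10 is a placeholder that should be replaced with the real storage subnet.

---------------
# /etc/systemd/system/glusterd.service.d/wait-for-storage.conf  (drop-in, sketch only)
[Service]
# Block glusterd's start until the gluster-facing bond has an address
# on the storage subnet (replace bond0 and 10.10.10 with your values).
ExecStartPre=/bin/sh -c 'until ip a s bond0 | grep -q 10.10.10; do usleep 100000; done'
---------------

After creating the drop-in, run "systemctl daemon-reload" so systemd picks it up; adding a timeout around the loop would also be sensible so a missing address cannot hang the boot forever.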
2014-12-05 14:50 GMT+01:00 Kaushal M <kshlmster@gmail.com>:
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
If even that fails, you could try adding an entry into /etc/hosts with the hostname of the system. This should prevent any more failures.
I still don't believe it's a problem with Gluster. Gluster uses apis provided by the system to perform name resolution. These definitely work correctly because you can start GlusterD later. Since the resolution failure only happens during boot, it points to system or network setup issues during boot. To me it seems like the network isn't completely setup at that point of time.
~kaushal
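To make that concrete, the two changes would look roughly like the sketch below. Note that the ordering line also adds network-online.target to After=, which is not spelled out above but is normally needed for Wants=network-online.target to have any ordering effect; the hosts entry uses the peer name from the glusterd log with a placeholder address.

---------------
# /usr/lib/systemd/system/glusterd.service  ([Unit] section only, sketch)
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target network-online.target rpcbind.service
Wants=network-online.target

# /etc/hosts  (placeholder address, use the host's real IP)
10.10.10.5   cpu05.zne01.hkg1.ovt.36stack.com cpu05
---------------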
On Fri, Dec 5, 2014 at 12:47 PM, Punit Dambiwal <hypunit@gmail.com> wrote:

Hi Kaushal,
It seems it's a bug in glusterfs 3.6... even when I arrange my systemd units to start the network service before glusterd, it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
Thanks, Punit
On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user on the mailing list a while back who had a similar issue of GlusterD failing to start. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration, and the problem vanished when he began using DHCP. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing you could look at.
I'll try to find the mail thread to see if it was the same problem as
you.
~kaushal
On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com> wrote:
I don't know much about how the network target is brought up in CentOS7, but I'll try as much as I can.
It seems to me that, after the network has been brought up and by the time GlusterD is started,
a. the machine hasn't yet received its hostname, or
b. it hasn't yet registered with the name server.
This is causing name resolution failures.
I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a.
So it seems to be b. But this kind of signing in happens only in DDNS systems, which doesn't seem to be the case for you.
Both of these reasons might be wrong (most likely wrong). You'd do good if you could ask for help from someone with more experience in systemd + networking.
~kaushal
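One way to test this theory is to capture what name resolution looks like at the moment glusterd would start. A throwaway oneshot unit such as the sketch below (the unit name is made up; the hostname is the peer taken from the glusterd log) logs whether the peer resolves and which addresses are already configured; check the result after a reboot with "journalctl -u resolve-check".

---------------
# /etc/systemd/system/resolve-check.service  (hypothetical debug unit)
[Unit]
Description=Log name resolution state just before glusterd starts
After=network.target
Before=glusterd.service

[Service]
Type=oneshot
# Try to resolve the peer, then dump the configured IPv4 addresses to the journal
ExecStart=/bin/sh -c 'getent hosts cpu05.zne01.hkg1.ovt.36stack.com || echo "peer does not resolve yet"; ip -4 addr show'

[Install]
WantedBy=multi-user.target
---------------

Enable it with "systemctl enable resolve-check.service" before rebooting.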
On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
This is the host which I rebooted... would you mind letting me know how I can make the glusterd service come up after the network, if the network is the issue? I am using CentOS 7...
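In the meantime, a few stock systemd commands (nothing oVirt- or Gluster-specific) can show how glusterd is currently ordered against the network targets on this host, which should make it clearer whether the ordering or the network itself is at fault:

---------------
# What glusterd waited for during the last boot, with timings
systemd-analyze critical-chain glusterd.service

# Units glusterd is ordered after (the network targets should show up here)
systemctl list-dependencies --after glusterd.service

# Whether anything actually pulls network-online.target into the boot
systemctl list-dependencies --reverse network-online.target
---------------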
On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote: > > This peer cannot be identified. > > " [2014-12-03 02:29:25.998153] D > [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] > 0-management: > Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com" > > I don't know why this address is not being resolved during boot time. > If > this is a valid peer, the the only reason I can think of this that
> network is not up. > > If you had previously detached the peer forcefully, the that could > have > left stale entries in some volumes. In this case as well, GlusterD > will fail > to identify the peer. > > Do either of these reasons seem a possibility to you? > > On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >> >> Hi Kaushal, >> >> Please find the logs here :- http://ur1.ca/iyoe5 and >> http://ur1.ca/iyoed >> >> On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> >> wrote: >>> >>> Hey Punit, >>> In the logs you've provided, GlusterD appears to be running >>> correctly. >>> Could you provide the logs for the time period when GlusterD >>> attempts to >>> start but fails. >>> >>> ~kaushal >>> >>> On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>>> >>>> Hi Kaushal, >>>> >>>> Please find the logs here :- http://ur1.ca/iyhs5 and >>>> http://ur1.ca/iyhue >>>> >>>> Thanks, >>>> punit >>>> >>>> >>>> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M <kshlmster@gmail.com
>>>> wrote: >>>>> >>>>> Hey Punit, >>>>> Could you start Glusterd in debug mode and provide the logs here? >>>>> To start it in debug mode, append '-LDEBUG' to the ExecStart
>>>>> in >>>>> the service file. >>>>> >>>>> ~kaushal >>>>> >>>>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal < hypunit@gmail.com> >>>>> wrote: >>>>> > Hi, >>>>> > >>>>> > Can Any body help me on this ?? >>>>> > >>>>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal >>>>> > <hypunit@gmail.com> >>>>> > wrote: >>>>> >> >>>>> >> Hi Kaushal, >>>>> >> >>>>> >> Thanks for the detailed reply....let me explain my setup first >>>>> >> :- >>>>> >> >>>>> >> 1. Ovirt Engine >>>>> >> 2. 4* host as well as storage machine (Host and gluster >>>>> >> combined) >>>>> >> 3. Every host has 24 bricks... >>>>> >> >>>>> >> Now whenever the host machine reboot...it can come up but can >>>>> >> not >>>>> >> join the >>>>> >> cluster again and through the following error "Gluster command >>>>> >> [<UNKNOWN>] >>>>> >> failed on server.." >>>>> >> >>>>> >> Please check my comment in line :- >>>>> >> >>>>> >> 1. Use the same string for doing the peer probe and for the >>>>> >> brick >>>>> >> address >>>>> >> during volume create/add-brick. Ideally, we suggest you use >>>>> >> properly >>>>> >> resolvable FQDNs everywhere. If that is not possible, then use >>>>> >> only >>>>> >> IP >>>>> >> addresses. Try to avoid short names. >>>>> >> --------------- >>>>> >> [root@cpu05 ~]# gluster peer status >>>>> >> Number of Peers: 3 >>>>> >> >>>>> >> Hostname: cpu03.stack.com >>>>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >>>>> >> State: Peer in Cluster (Connected) >>>>> >> >>>>> >> Hostname: cpu04.stack.com >>>>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >>>>> >> State: Peer in Cluster (Connected) >>>>> >> Other names: >>>>> >> 10.10.0.8 >>>>> >> >>>>> >> Hostname: cpu02.stack.com >>>>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >>>>> >> State: Peer in Cluster (Connected) >>>>> >> [root@cpu05 ~]# >>>>> >> ---------------- >>>>> >> 2. During boot up, make sure to launch glusterd only after
>>>>> >> network is >>>>> >> up. This will allow the new peer identification mechanism to do >>>>> >> its >>>>> >> job correctly. >>>>> >> >> I think the service itself doing the same job.... >>>>> >> >>>>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >>>>> >> [Unit] >>>>> >> Description=GlusterFS, a clustered file-system server >>>>> >> After=network.target rpcbind.service >>>>> >> Before=network-online.target >>>>> >> >>>>> >> [Service] >>>>> >> Type=forking >>>>> >> PIDFile=/var/run/glusterd.pid >>>>> >> LimitNOFILE=65536 >>>>> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >>>>> >> KillMode=process >>>>> >> >>>>> >> [Install] >>>>> >> WantedBy=multi-user.target >>>>> >> [root@cpu05 ~]# >>>>> >> -------------------- >>>>> >> >>>>> >> gluster logs :- >>>>> >> >>>>> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >>>>> >> [glusterfsd.c:2018:main] >>>>> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd >>>>> >> version >>>>> >> 3.6.1 >>>>> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >>>>> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] >>>>> >> 0-management: >>>>> >> Maximum allowed open file descriptors set to 65536 >>>>> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] >>>>> >> 0-management: >>>>> >> Using >>>>> >> /var/lib/glusterd as working directory >>>>> >> [2014-11-24 09:22:22.155216] W >>>>> >> [rdma.c:4195:__gf_rdma_ctx_create] >>>>> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >>>>> >> such device) >>>>> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >>>>> >> 0-rdma.management: >>>>> >> Failed to initialize IB Device >>>>> >> [2014-11-24 09:22:22.155285] E >>>>> >> [rpc-transport.c:333:rpc_transport_load] >>>>> >> 0-rpc-transport: 'rdma' initialization failed >>>>> >> [2014-11-24 09:22:22.155354] W >>>>> >> [rpcsvc.c:1524:rpcsvc_transport_create] >>>>> >> 0-rpc-service: cannot create listener, initing the transport >>>>> >> failed >>>>> >> [2014-11-24 09:22:22.156290] I >>>>> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >>>>> >> geo-replication >>>>> >> module not installed in the system >>>>> >> [2014-11-24 09:22:22.161318] I >>>>> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >>>>> >> retrieved >>>>> >> op-version: 30600 >>>>> >> [2014-11-24 09:22:22.821800] I >>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>> >> 0-management: >>>>> >> connect returned 0 >>>>> >> [2014-11-24 09:22:22.825810] I >>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>> >> 0-management: >>>>> >> connect returned 0 >>>>> >> [2014-11-24 09:22:22.828705] I >>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>> >> 0-management: >>>>> >> connect returned 0 >>>>> >> [2014-11-24 09:22:22.828771] I >>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>> >> 0-management: setting frame-timeout to 600 >>>>> >> [2014-11-24 09:22:22.832670] I >>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>> >> 0-management: setting frame-timeout to 600 >>>>> >> [2014-11-24 09:22:22.835919] I >>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>> >> 0-management: setting frame-timeout to 600 >>>>> >> [2014-11-24 09:22:22.840209] E >>>>> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >>>>> >> resolve >>>>> >> brick failed in restore >>>>> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >>>>> >> 0-management: >>>>> >> Initialization of volume 'management' failed, review your >>>>> >> volfile 
>>>>> >> again >>>>> >> [2014-11-24 09:22:22.840245] E >>>>> >> [graph.c:322:glusterfs_graph_init] >>>>> >> 0-management: initializing translator failed >>>>> >> [2014-11-24 09:22:22.840264] E >>>>> >> [graph.c:525:glusterfs_graph_activate] >>>>> >> 0-graph: init failed >>>>> >> [2014-11-24 09:22:22.840754] W >>>>> >> [glusterfsd.c:1194:cleanup_and_exit] >>>>> >> (--> >>>>> >> 0-: received signum (0), shutting down >>>>> >> >>>>> >> Thanks, >>>>> >> Punit >>>>> >> >>>>> >> >>>>> >> >>>>> >> >>>>> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M >>>>> >> <kshlmster@gmail.com> >>>>> >> wrote: >>>>> >>> >>>>> >>> Based on the logs I can guess that glusterd is being started >>>>> >>> before >>>>> >>> the network has come up and that the addresses given to bricks >>>>> >>> do >>>>> >>> not >>>>> >>> directly match the addresses used in during peer probe. >>>>> >>> >>>>> >>> The gluster_after_reboot log has the line "[2014-11-25 >>>>> >>> 06:46:09.972113] E >>>>> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >>>>> >>> 0-glusterd: resolve brick failed in restore". >>>>> >>> >>>>> >>> Brick resolution fails when glusterd cannot match the address >>>>> >>> for >>>>> >>> the >>>>> >>> brick, with one of the peers. Brick resolution happens in two >>>>> >>> phases, >>>>> >>> 1. We first try to identify the peer by performing string >>>>> >>> comparisions >>>>> >>> with the brick address and the peer addresses (The peer names >>>>> >>> will >>>>> >>> be >>>>> >>> the names/addresses that were given when the peer was
>>>>> >>> 2. If we don't find a match from step 1, we will then resolve >>>>> >>> all >>>>> >>> the >>>>> >>> brick address and the peer addresses into addrinfo structs, >>>>> >>> and >>>>> >>> then >>>>> >>> compare these structs to find a match. This process should >>>>> >>> generally >>>>> >>> find a match if available. This will fail only if the network >>>>> >>> is >>>>> >>> not >>>>> >>> up yet as we cannot resolve addresses. >>>>> >>> >>>>> >>> The above steps are applicable only to glusterfs versions >>>>> >>> >=3.6. >>>>> >>> They >>>>> >>> were introduced to reduce problems with peer identification, >>>>> >>> like >>>>> >>> the >>>>> >>> one you encountered >>>>> >>> >>>>> >>> Since both of the steps failed to find a match in one run, but >>>>> >>> succeeded later, we can come to the conclusion that, >>>>> >>> a) the bricks don't have the exact same string used in peer >>>>> >>> probe >>>>> >>> for >>>>> >>> their addresses as step 1 failed, and >>>>> >>> b) the network was not up in the initial run, as step 2 failed >>>>> >>> during >>>>> >>> the initial run, but passed in the second run. >>>>> >>> >>>>> >>> Please let me know if my conclusion is correct. >>>>> >>> >>>>> >>> If it is, you can solve your problem in two ways. >>>>> >>> 1. Use the same string for doing the peer probe and for the >>>>> >>> brick >>>>> >>> address during volume create/add-brick. Ideally, we suggest >>>>> >>> you >>>>> >>> use >>>>> >>> properly resolvable FQDNs everywhere. If that is not
>>>>> >>> then >>>>> >>> use only IP addresses. Try to avoid short names. >>>>> >>> 2. During boot up, make sure to launch glusterd only after
>>>>> >>> network >>>>> >>> is up. This will allow the new peer identification mechanism >>>>> >>> to do >>>>> >>> its >>>>> >>> job correctly. >>>>> >>> >>>>> >>> >>>>> >>> If you have already followed these steps and yet still hit
>>>>> >>> problem, then please provide more information (setup, logs, >>>>> >>> etc.). >>>>> >>> It >>>>> >>> could be much different problem that you are facing. >>>>> >>> >>>>> >>> ~kaushal >>>>> >>> >>>>> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal >>>>> >>> <hypunit@gmail.com> >>>>> >>> wrote: >>>>> >>> > Is there any one can help on this ?? >>>>> >>> > >>>>> >>> > Thanks, >>>>> >>> > punit >>>>> >>> > >>>>> >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal >>>>> >>> > <hypunit@gmail.com> >>>>> >>> > wrote: >>>>> >>> >> >>>>> >>> >> Hi, >>>>> >>> >> >>>>> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >>>>> >>> >> >>>>> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >>>>> >>> >> <kmayilsa@redhat.com> >>>>> >>> >> wrote: >>>>> >>> >>> >>>>> >>> >>> [+Gluster-users@gluster.org] >>>>> >>> >>> >>>>> >>> >>> "Initialization of volume 'management' failed, review your >>>>> >>> >>> volfile >>>>> >>> >>> again", glusterd throws this error when the service is >>>>> >>> >>> started >>>>> >>> >>> automatically >>>>> >>> >>> after the reboot. But the service is successfully started >>>>> >>> >>> later >>>>> >>> >>> manually by >>>>> >>> >>> the user. >>>>> >>> >>> >>>>> >>> >>> can somebody from gluster-users please help on this? >>>>> >>> >>> >>>>> >>> >>> glusterfs version: 3.5.1 >>>>> >>> >>> >>>>> >>> >>> Thanks, >>>>> >>> >>> Kanagaraj >>>>> >>> >>> >>>>> >>> >>> ----- Original Message ----- >>>>> >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >>>>> >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >>>>> >>> >>> > Cc: users@ovirt.org >>>>> >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >>>>> >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] >>>>> >>> >>> > failed on >>>>> >>> >>> > server... >>>>> >>> >>> > >>>>> >>> >>> > Hi Kanagraj, >>>>> >>> >>> > >>>>> >>> >>> > Please check the attached log files....i didn't find any >>>>> >>> >>> > thing >>>>> >>> >>> > special.... >>>>> >>> >>> > >>>>> >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj >>>>> >>> >>> > <kmayilsa@redhat.com> >>>>> >>> >>> > wrote: >>>>> >>> >>> > >>>>> >>> >>> > > Do you see any errors in >>>>> >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or >>>>> >>> >>> > > vdsm.log >>>>> >>> >>> > > when >>>>> >>> >>> > > the >>>>> >>> >>> > > service is trying to start automatically after the >>>>> >>> >>> > > reboot? >>>>> >>> >>> > > >>>>> >>> >>> > > Thanks, >>>>> >>> >>> > > Kanagaraj >>>>> >>> >>> > > >>>>> >>> >>> > > >>>>> >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >>>>> >>> >>> > > >>>>> >>> >>> > > Hi Kanagaraj, >>>>> >>> >>> > > >>>>> >>> >>> > > Yes...once i will start the gluster service and
>>>>> >>> >>> > > vdsmd >>>>> >>> >>> > > ...the >>>>> >>> >>> > > host >>>>> >>> >>> > > can connect to cluster...but the question is why it's >>>>> >>> >>> > > not >>>>> >>> >>> > > started >>>>> >>> >>> > > even it >>>>> >>> >>> > > has chkconfig enabled... >>>>> >>> >>> > > >>>>> >>> >>> > > I have tested it in two host cluster >>>>> >>> >>> > > environment...(Centos 6.6 >>>>> >>> >>> > > and >>>>> >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed >>>>> >>> >>> > > to >>>>> >>> >>> > > reconnect >>>>> >>> >>> > > in >>>>> >>> >>> > > to >>>>> >>> >>> > > cluster after reboot.... >>>>> >>> >>> > > >>>>> >>> >>> > > In both the environment glusterd enabled for next >>>>> >>> >>> > > boot....but >>>>> >>> >>> > > it's >>>>> >>> >>> > > failed with the same error....seems it's bug in either >>>>> >>> >>> > > gluster or >>>>> >>> >>> > > Ovirt ?? >>>>> >>> >>> > > >>>>> >>> >>> > > Please help me to find the workaround here if can not >>>>> >>> >>> > > resolve >>>>> >>> >>> > > it...as >>>>> >>> >>> > > without this the Host machine can not connect after >>>>> >>> >>> > > reboot....that >>>>> >>> >>> > > means >>>>> >>> >>> > > engine will consider it as down and every time need to >>>>> >>> >>> > > manually >>>>> >>> >>> > > start >>>>> >>> >>> > > the >>>>> >>> >>> > > gluster service and vdsmd... ?? >>>>> >>> >>> > > >>>>> >>> >>> > > Thanks, >>>>> >>> >>> > > Punit >>>>> >>> >>> > > >>>>> >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj >>>>> >>> >>> > > <kmayilsa@redhat.com> >>>>> >>> >>> > > wrote: >>>>> >>> >>> > > >>>>> >>> >>> > >> From vdsm.log "error: Connection failed. Please >>>>> >>> >>> > >> check if >>>>> >>> >>> > >> gluster >>>>> >>> >>> > >> daemon >>>>> >>> >>> > >> is operational." >>>>> >>> >>> > >> >>>>> >>> >>> > >> Starting glusterd service should fix this issue. >>>>> >>> >>> > >> 'service >>>>> >>> >>> > >> glusterd >>>>> >>> >>> > >> start' >>>>> >>> >>> > >> But i am wondering why the glusterd was not started >>>>> >>> >>> > >> automatically >>>>> >>> >>> > >> after >>>>> >>> >>> > >> the reboot. 
>>>>> >>> >>> > >> >>>>> >>> >>> > >> Thanks, >>>>> >>> >>> > >> Kanagaraj >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >>>>> >>> >>> > >> >>>>> >>> >>> > >> Hi Kanagaraj, >>>>> >>> >>> > >> >>>>> >>> >>> > >> Please find the attached VDSM logs :- >>>>> >>> >>> > >> >>>>> >>> >>> > >> ---------------- >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >>>>> >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 >>>>> >>> >>> > >> aborting >>>>> >>> >>> > >> False >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>>> >>> >>> > >> from >>>>> >>> >>> > >> state >>>>> >>> >>> > >> init >>>>> >>> >>> > >> -> >>>>> >>> >>> > >> state preparing >>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>> >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run >>>>> >>> >>> > >> and >>>>> >>> >>> > >> protect: >>>>> >>> >>> > >> repoStats(options=None) >>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>> >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run >>>>> >>> >>> > >> and >>>>> >>> >>> > >> protect: >>>>> >>> >>> > >> repoStats, Return response: {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >>>>> >>> >>> > >> >>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>>> >>> >>> > >> from >>>>> >>> >>> > >> state >>>>> >>> >>> > >> preparing >>>>> >>> >>> > >> -> >>>>> >>> >>> > >> state finished >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>>>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 >>>>> >>> >>> > >> aborting >>>>> >>> >>> > >> False >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) >>>>> >>> >>> > >> client >>>>> >>> >>> > >> [10.10.10.2]::call >>>>> >>> >>> > >> getCapabilities with () {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > 
>> 21:41:41,553::utils::738::root::(execCmd) >>>>> >>> >>> > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) >>>>> >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-object',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-plugin',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-account',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-proxy',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-doc',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('gluster-swift-container',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>>> >>> >>> > >> package >>>>> >>> >>> > >> ('glusterfs-geo-replication',) not found >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,600::caps::646::root::(get) >>>>> >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 >>>>> >>> >>> > >> required >>>>> >>> >>> > >> >= >>>>> >>> >>> > >> 0.10.2-31 >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) >>>>> >>> >>> > >> return >>>>> >>> >>> > >> getCapabilities >>>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>>> >>> >>> > >> 'info': >>>>> >>> >>> > >> {'HBAInventory': >>>>> >>> >>> > >> {'iSCSI': [{'InitiatorName': >>>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >>>>> >>> >>> > >> 'FC': >>>>> >>> >>> > >> []}, 'packages2': {'kernel': {'release': >>>>> >>> >>> > >> '431.el6.x86_64', >>>>> >>> >>> > >> 'buildtime': >>>>> >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >>>>> >>> >>> > >> {'release': >>>>> >>> >>> > >> '1.el6', >>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, >>>>> >>> >>> > >> 'glusterfs-fuse': >>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>>> >>> >>> > >> 'version': >>>>> >>> >>> > >> '3.5.1'}, >>>>> >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': >>>>> >>> >>> > >> 1402324637L, >>>>> >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': >>>>> >>> >>> > >> '1.gitdb83943.el6', >>>>> >>> >>> > >> 'buildtime': >>>>> >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': >>>>> >>> >>> > >> {'release': >>>>> >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, >>>>> >>> >>> > >> 'version': >>>>> >>> >>> > >> '0.12.1.2'}, >>>>> >>> >>> > >> 
'qemu-img': {'release': '2.415.el6_5.10', >>>>> >>> >>> > >> 'buildtime': >>>>> >>> >>> > >> 1402435700L, >>>>> >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': {'release': >>>>> >>> >>> > >> '29.el6_5.9', >>>>> >>> >>> > >> 'buildtime': >>>>> >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': >>>>> >>> >>> > >> {'release': >>>>> >>> >>> > >> '1.el6', >>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >>>>> >>> >>> > >> {'release': >>>>> >>> >>> > >> '2.el6', >>>>> >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >>>>> >>> >>> > >> 'glusterfs-server': >>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>>> >>> >>> > >> 'version': >>>>> >>> >>> > >> '3.5.1'}}, >>>>> >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, >>>>> >>> >>> > >> 'cpuModel': >>>>> >>> >>> > >> 'Intel(R) >>>>> >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': >>>>> >>> >>> > >> 'false', >>>>> >>> >>> > >> 'hooks': >>>>> >>> >>> > >> {}, >>>>> >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': >>>>> >>> >>> > >> {'mode': '1'}, >>>>> >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', >>>>> >>> >>> > >> '2.3'], >>>>> >>> >>> > >> 'networks': >>>>> >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': >>>>> >>> >>> > >> '43.252.176.16', >>>>> >>> >>> > >> 'bridged': >>>>> >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', >>>>> >>> >>> > >> 'ipv4addrs': >>>>> >>> >>> > >> [' >>>>> >>> >>> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], >>>>> >>> >>> > >> 'interface': >>>>> >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': >>>>> >>> >>> > >> '43.25.17.1'}, >>>>> >>> >>> > >> 'Internal': >>>>> >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'HOTPLUG': >>>>> >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'BOOTPROTO': >>>>> >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': >>>>> >>> >>> > >> 'Bridge', >>>>> >>> >>> > >> 'ONBOOT': >>>>> >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs': >>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>>> >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', >>>>> >>> >>> > >> 'stp': >>>>> >>> >>> > >> 'off', >>>>> >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', >>>>> >>> >>> > >> 'ports': >>>>> >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', >>>>> >>> >>> > >> 'addr': >>>>> >>> >>> > >> '10.10.10.6', >>>>> >>> >>> > >> 'bridged': False, 'ipv6addrs': >>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask': >>>>> >>> >>> > >> '255.255.255.0', >>>>> >>> >>> > >> 'ipv4addrs': [' >>>>> >>> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], >>>>> >>> >>> > >> 'interface': >>>>> >>> >>> > >> u'bond1', >>>>> >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': >>>>> >>> >>> > >> {'iface': >>>>> >>> >>> > >> 'VMNetwork', >>>>> >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'MTU': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>>> >>> >>> > >> 'none', >>>>> >>> >>> > >> 'STP': >>>>> >>> >>> > >> 'off', >>>>> >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 
>>>>> >>> >>> > >> 'no'}, >>>>> >>> >>> > >> 'bridged': >>>>> >>> >>> > >> True, >>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>> >>> >>> > >> 'gateway': >>>>> >>> >>> > >> '', >>>>> >>> >>> > >> 'bootproto4': >>>>> >>> >>> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, >>>>> >>> >>> > >> 'bridges': >>>>> >>> >>> > >> {'Internal': >>>>> >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'MTU': >>>>> >>> >>> > >> '9000', >>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>>> >>> >>> > >> 'none', >>>>> >>> >>> > >> 'STP': >>>>> >>> >>> > >> 'off', >>>>> >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': >>>>> >>> >>> > >> 'no'}, >>>>> >>> >>> > >> 'ipv6addrs': >>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', >>>>> >>> >>> > >> 'netmask': '', >>>>> >>> >>> > >> 'stp': >>>>> >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', >>>>> >>> >>> > >> 'gateway': >>>>> >>> >>> > >> '', >>>>> >>> >>> > >> 'opts': >>>>> >>> >>> > >> {'topology_change_detected': '0', >>>>> >>> >>> > >> 'multicast_last_member_count': >>>>> >>> >>> > >> '2', >>>>> >>> >>> > >> 'hash_elasticity': '4', >>>>> >>> >>> > >> 'multicast_query_response_interval': >>>>> >>> >>> > >> '999', >>>>> >>> >>> > >> 'multicast_snooping': '1', >>>>> >>> >>> > >> 'multicast_startup_query_interval': >>>>> >>> >>> > >> '3124', >>>>> >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': >>>>> >>> >>> > >> '25496', >>>>> >>> >>> > >> 'max_age': >>>>> >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', >>>>> >>> >>> > >> 'root_id': >>>>> >>> >>> > >> '8000.001018cddaac', 'priority': '32768', >>>>> >>> >>> > >> 'multicast_membership_interval': >>>>> >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >>>>> >>> >>> > >> 'multicast_querier': >>>>> >>> >>> > >> '0', >>>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>>> >>> >>> > >> '199', >>>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>>> >>> >>> > >> '8000.001018cddaac', >>>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>>> >>> >>> > >> 'gc_timer': >>>>> >>> >>> > >> '31', >>>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>>> >>> >>> > >> 'multicast_last_member_interval': >>>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>>> >>> >>> > >> 'ports': >>>>> >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': >>>>> >>> >>> > >> {'DEFROUTE': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', >>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': >>>>> >>> >>> > >> 'VMNetwork', >>>>> >>> >>> > >> 'TYPE': >>>>> >>> >>> > >> 'Bridge', >>>>> >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': >>>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>>> >>> >>> > >> 'ipv6gateway': >>>>> >>> >>> > >> '::', >>>>> >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': >>>>> >>> >>> > >> '0', >>>>> >>> >>> > >> 'multicast_last_member_count': '2', >>>>> >>> >>> > >> 'hash_elasticity': >>>>> >>> >>> > >> '4', >>>>> >>> >>> > >> 
'multicast_query_response_interval': '999', >>>>> >>> >>> > >> 'multicast_snooping': >>>>> >>> >>> > >> '1', >>>>> >>> >>> > >> 'multicast_startup_query_interval': '3124', >>>>> >>> >>> > >> 'hello_timer': >>>>> >>> >>> > >> '131', >>>>> >>> >>> > >> 'multicast_querier_interval': '25496', 'max_age': >>>>> >>> >>> > >> '1999', >>>>> >>> >>> > >> 'hash_max': >>>>> >>> >>> > >> '512', 'stp_state': '0', 'root_id': >>>>> >>> >>> > >> '8000.60eb6920b46c', >>>>> >>> >>> > >> 'priority': >>>>> >>> >>> > >> '32768', 'multicast_membership_interval': '25996', >>>>> >>> >>> > >> 'root_path_cost': >>>>> >>> >>> > >> '0', >>>>> >>> >>> > >> 'root_port': '0', 'multicast_querier': '0', >>>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>>> >>> >>> > >> '199', >>>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>>> >>> >>> > >> '8000.60eb6920b46c', >>>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>>> >>> >>> > >> 'gc_timer': >>>>> >>> >>> > >> '31', >>>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>>> >>> >>> > >> 'multicast_last_member_interval': >>>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>>> >>> >>> > >> 'ports': >>>>> >>> >>> > >> ['bond0.36']}}, 'uuid': >>>>> >>> >>> > >> '44454C4C-4C00-1057-8053-B7C04F504E31', >>>>> >>> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': >>>>> >>> >>> > >> {'permhwaddr': >>>>> >>> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': >>>>> >>> >>> > >> 'yes', >>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>> >>> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', >>>>> >>> >>> > >> 'MASTER': >>>>> >>> >>> > >> 'bond1', >>>>> >>> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '9000', >>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>>> >>> >>> > >> 'speed': >>>>> >>> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', >>>>> >>> >>> > >> 'addr': '', >>>>> >>> >>> > >> 'cfg': >>>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>>> >>> >>> > >> '9000', >>>>> >>> >>> > >> 'HWADDR': >>>>> >>> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': >>>>> >>> >>> > >> 'eth2', >>>>> >>> >>> > >> 'ONBOOT': >>>>> >>> >>> > >> 'no'}, >>>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', >>>>> >>> >>> > >> 'ipv4addrs': [], >>>>> >>> >>> > >> 'hwaddr': >>>>> >>> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': >>>>> >>> >>> > >> {'permhwaddr': >>>>> >>> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': >>>>> >>> >>> > >> 'yes', >>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>> >>> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', >>>>> >>> >>> > >> 'MASTER': >>>>> >>> >>> > >> 'bond0', >>>>> >>> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>>> >>> >>> > >> '60:eb:69:20:b4:6c', >>>>> >>> >>> > >> 'speed': >>>>> >>> >>> > >> 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', >>>>> >>> >>> > >> 'addr': '', >>>>> >>> >>> > >> 'cfg': >>>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'HWADDR': >>>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': >>>>> >>> >>> > >> 'eth0', >>>>> >>> >>> > >> 'ONBOOT': >>>>> >>> >>> > >> 'yes'}, >>>>> >>> >>> > >> 
'ipv6addrs': [], 'mtu': '1500', 'netmask': '', >>>>> >>> >>> > >> 'ipv4addrs': [], >>>>> >>> >>> > >> 'hwaddr': >>>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, >>>>> >>> >>> > >> 'software_revision': '1', >>>>> >>> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', >>>>> >>> >>> > >> '3.5'], >>>>> >>> >>> > >> 'cpuFlags': >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >>>>> >>> >>> > >> 'ISCSIInitiatorName': >>>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8', >>>>> >>> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs': >>>>> >>> >>> > >> ['3.0', >>>>> >>> >>> > >> '3.1', >>>>> >>> >>> > >> '3.2', >>>>> >>> >>> > >> '3.3', >>>>> >>> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': >>>>> >>> >>> > >> '321', >>>>> >>> >>> > >> 'bondings': >>>>> >>> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', >>>>> >>> >>> > >> 'netmask': '', >>>>> >>> >>> > >> 'slaves': >>>>> >>> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': >>>>> >>> >>> > >> '', >>>>> >>> >>> > >> 'cfg': >>>>> >>> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'BONDING_OPTS': >>>>> >>> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': >>>>> >>> >>> > >> 'yes'}, >>>>> >>> >>> > >> 'ipv6addrs': >>>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', >>>>> >>> >>> > >> 'netmask': '', >>>>> >>> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', >>>>> >>> >>> > >> 'slaves': >>>>> >>> >>> > >> ['eth0', >>>>> >>> >>> > >> 'eth1'], >>>>> >>> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': >>>>> >>> >>> > >> {'addr': >>>>> >>> >>> > >> '10.10.10.6', >>>>> >>> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', >>>>> >>> >>> > >> 'HOTPLUG': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'MTU': >>>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': >>>>> >>> >>> > >> '255.255.255.0', >>>>> >>> >>> > >> 'BOOTPROTO': >>>>> >>> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', >>>>> >>> >>> > >> 'DEVICE': >>>>> >>> >>> > >> 'bond1', >>>>> >>> >>> > >> 'ONBOOT': >>>>> >>> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '9000', >>>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>>>> >>> >>> > >> ['10.10.10.6/24' >>>>> >>> >>> > >> <http://10.10.10.6/24%27>], 'hwaddr': >>>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>>> >>> >>> > >> 'slaves': >>>>> >>> >>> > >> ['eth2', 'eth3'], 'opts': {'miimon': '100', 'mode': >>>>> >>> >>> > >> '4'}}, >>>>> >>> >>> > >> 'bond2': >>>>> >>> >>> > >> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', >>>>> >>> >>> > >> 'slaves': >>>>> >>> >>> > >> [], >>>>> >>> >>> > >> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', >>>>> >>> >>> > >> 'cfg': {}, >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', 'netmask': '', 'slaves': [], 'hwaddr': >>>>> >>> >>> > >> '00:00:00:00:00:00'}}, >>>>> >>> >>> > >> 'software_version': '4.16', 'memSize': '24019', >>>>> >>> >>> > >> 
'cpuSpeed': >>>>> >>> >>> > >> '2667.000', >>>>> >>> >>> > >> 'numaNodes': {u'1': {'totalMemory': '12288', 'cpus': >>>>> >>> >>> > >> [6, >>>>> >>> >>> > >> 7, 8, >>>>> >>> >>> > >> 9, >>>>> >>> >>> > >> 10, 11, >>>>> >>> >>> > >> 18, 19, 20, 21, 22, 23]}, u'0': {'totalMemory': >>>>> >>> >>> > >> '12278', >>>>> >>> >>> > >> 'cpus': >>>>> >>> >>> > >> [0, >>>>> >>> >>> > >> 1, 2, >>>>> >>> >>> > >> 3, 4, 5, 12, 13, 14, 15, 16, 17]}}, 'version_name': >>>>> >>> >>> > >> 'Snow >>>>> >>> >>> > >> Man', >>>>> >>> >>> > >> 'vlans': >>>>> >>> >>> > >> {'bond0.10': {'iface': 'bond0', 'addr': >>>>> >>> >>> > >> '43.25.17.16', >>>>> >>> >>> > >> 'cfg': >>>>> >>> >>> > >> {'DEFROUTE': >>>>> >>> >>> > >> 'yes', 'VLAN': 'yes', 'IPADDR': '43.25.17.16', >>>>> >>> >>> > >> 'HOTPLUG': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'GATEWAY': >>>>> >>> >>> > >> '43.25.17.1', 'NM_CONTROLLED': 'no', 'NETMASK': >>>>> >>> >>> > >> '255.255.255.0', >>>>> >>> >>> > >> 'BOOTPROTO': 'none', 'DEVICE': 'bond0.10', 'MTU': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'ONBOOT': >>>>> >>> >>> > >> 'yes'}, >>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>> >>> >>> > >> 'vlanid': >>>>> >>> >>> > >> 10, >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>>>> >>> >>> > >> ['43.25.17.16/24'] >>>>> >>> >>> > >> <http://43.25.17.16/24%27%5D>}, 'bond0.36': {'iface': >>>>> >>> >>> > >> 'bond0', >>>>> >>> >>> > >> 'addr': >>>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'VMNetwork', 'VLAN': 'yes', >>>>> >>> >>> > >> 'HOTPLUG': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'MTU': >>>>> >>> >>> > >> '1500', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond0.36', >>>>> >>> >>> > >> 'ONBOOT': >>>>> >>> >>> > >> 'no'}, >>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>> >>> >>> > >> 'vlanid': >>>>> >>> >>> > >> 36, >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '1500', >>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}, 'bond1.100': >>>>> >>> >>> > >> {'iface': >>>>> >>> >>> > >> 'bond1', >>>>> >>> >>> > >> 'addr': >>>>> >>> >>> > >> '', 'cfg': {'BRIDGE': 'Internal', 'VLAN': 'yes', >>>>> >>> >>> > >> 'HOTPLUG': >>>>> >>> >>> > >> 'no', >>>>> >>> >>> > >> 'MTU': >>>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'DEVICE': 'bond1.100', >>>>> >>> >>> > >> 'ONBOOT': >>>>> >>> >>> > >> 'no'}, >>>>> >>> >>> > >> 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>>>> >>> >>> > >> 'vlanid': >>>>> >>> >>> > >> 100, >>>>> >>> >>> > >> 'mtu': >>>>> >>> >>> > >> '9000', >>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': []}}, 'cpuCores': '12', >>>>> >>> >>> > >> 'kvmEnabled': >>>>> >>> >>> > >> 'true', >>>>> >>> >>> > >> 'guestOverhead': '65', 'cpuThreads': '24', >>>>> >>> >>> > >> 'emulatedMachines': >>>>> >>> >>> > >> [u'rhel6.5.0', u'pc', u'rhel6.4.0', u'rhel6.3.0', >>>>> >>> >>> > >> u'rhel6.2.0', >>>>> >>> >>> > >> u'rhel6.1.0', u'rhel6.0.0', u'rhel5.5.0', >>>>> >>> >>> > >> u'rhel5.4.4', >>>>> >>> >>> > >> u'rhel5.4.0'], >>>>> >>> >>> > >> 'operatingSystem': {'release': '5.el6.centos.11.1', >>>>> >>> >>> > >> 'version': >>>>> >>> >>> > >> '6', >>>>> >>> >>> > >> 'name': >>>>> >>> >>> > >> 'RHEL'}, 'lastClient': '10.10.10.2'}} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,620::BindingXMLRPC::1132::vds::(wrapper) >>>>> >>> >>> > >> client >>>>> >>> >>> > >> [10.10.10.2]::call >>>>> >>> >>> > >> getHardwareInfo with () {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,621::BindingXMLRPC::1139::vds::(wrapper) >>>>> >>> 
>>> > >> return >>>>> >>> >>> > >> getHardwareInfo >>>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>>> >>> >>> > >> 'info': >>>>> >>> >>> > >> {'systemProductName': 'CS24-TY', >>>>> >>> >>> > >> 'systemSerialNumber': >>>>> >>> >>> > >> '7LWSPN1', >>>>> >>> >>> > >> 'systemFamily': 'Server', 'systemVersion': 'A00', >>>>> >>> >>> > >> 'systemUUID': >>>>> >>> >>> > >> '44454c4c-4c00-1057-8053-b7c04f504e31', >>>>> >>> >>> > >> 'systemManufacturer': >>>>> >>> >>> > >> 'Dell'}} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> 21:41:41,733::BindingXMLRPC::1132::vds::(wrapper) >>>>> >>> >>> > >> client >>>>> >>> >>> > >> [10.10.10.2]::call >>>>> >>> >>> > >> hostsList with () {} flowID [222e8036] >>>>> >>> >>> > >> Thread-13::ERROR::2014-11-24 >>>>> >>> >>> > >> 21:41:44,753::BindingXMLRPC::1148::vds::(wrapper) >>>>> >>> >>> > >> vdsm >>>>> >>> >>> > >> exception >>>>> >>> >>> > >> occured >>>>> >>> >>> > >> Traceback (most recent call last): >>>>> >>> >>> > >> File "/usr/share/vdsm/rpc/BindingXMLRPC.py",
>>>>> >>> >>> > >> 1135, >>>>> >>> >>> > >> in >>>>> >>> >>> > >> wrapper >>>>> >>> >>> > >> res = f(*args, **kwargs) >>>>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 54, in >>>>> >>> >>> > >> wrapper >>>>> >>> >>> > >> rv = func(*args, **kwargs) >>>>> >>> >>> > >> File "/usr/share/vdsm/gluster/api.py", line 251, in >>>>> >>> >>> > >> hostsList >>>>> >>> >>> > >> return {'hosts': >>>>> >>> >>> > >> self.svdsmProxy.glusterPeerStatus()} >>>>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 50, in >>>>> >>> >>> > >> __call__ >>>>> >>> >>> > >> return callMethod() >>>>> >>> >>> > >> File "/usr/share/vdsm/supervdsm.py", line 48, in >>>>> >>> >>> > >> <lambda> >>>>> >>> >>> > >> **kwargs) >>>>> >>> >>> > >> File "<string>", line 2, in glusterPeerStatus >>>>> >>> >>> > >> File >>>>> >>> >>> > >> "/usr/lib64/python2.6/multiprocessing/managers.py", >>>>> >>> >>> > >> line >>>>> >>> >>> > >> 740, >>>>> >>> >>> > >> in >>>>> >>> >>> > >> _callmethod >>>>> >>> >>> > >> raise convert_to_error(kind, result) >>>>> >>> >>> > >> GlusterCmdExecFailedException: Command execution >>>>> >>> >>> > >> failed >>>>> >>> >>> > >> error: Connection failed. Please check if gluster >>>>> >>> >>> > >> daemon >>>>> >>> >>> > >> is >>>>> >>> >>> > >> operational. >>>>> >>> >>> > >> return code: 1 >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:50,949::task::595::Storage.TaskManager.Task::(_updateState) >>>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving >>>>> >>> >>> > >> from >>>>> >>> >>> > >> state >>>>> >>> >>> > >> init >>>>> >>> >>> > >> -> >>>>> >>> >>> > >> state preparing >>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>> >>> >>> > >> 21:41:50,950::logUtils::44::dispatcher::(wrapper) Run >>>>> >>> >>> > >> and >>>>> >>> >>> > >> protect: >>>>> >>> >>> > >> repoStats(options=None) >>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>> >>> >>> > >> 21:41:50,950::logUtils::47::dispatcher::(wrapper) Run >>>>> >>> >>> > >> and >>>>> >>> >>> > >> protect: >>>>> >>> >>> > >> repoStats, Return response: {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:50,950::task::1191::Storage.TaskManager.Task::(prepare) >>>>> >>> >>> > >> >>>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::finished: {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:50,950::task::595::Storage.TaskManager.Task::(_updateState) >>>>> >>> >>> > >> Task=`c9042986-c978-4b08-adb2-616f5299e115`::moving >>>>> >>> >>> > >> from >>>>> >>> >>> > >> state >>>>> >>> >>> > >> preparing >>>>> >>> >>> > >> -> >>>>> >>> >>> > >> state finished >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:50,951::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>>>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:50,951::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>> >>> >>> > >> >>>>> >>> >>> > >> >>>>> >>> >>> > >> 21:41:50,951::task::993::Storage.TaskManager.Task::(_decref) >>>>> >>> >>> > >> 
Task=`c9042986-c978-4b08-adb2-616f5299e115`::ref 0 >>>>> >>> >>> > >> aborting >>>>> >>> >>> > >> False >>>>> >>> >>> > >> ------------------------------- >>>>> >>> >>> > >> >>>>> >>> >>> > >> [root@compute4 ~]# service glusterd status >>>>> >>> >>> > >> glusterd is stopped >>>>> >>> >>> > >> [root@compute4 ~]# chkconfig --list | grep glusterd >>>>> >>> >>> > >> glusterd 0:off 1:off 2:on 3:on 4:on >>>>> >>> >>> > >> 5:on >>>>> >>> >>> > >> 6:off >>>>> >>> >>> > >> [root@compute4 ~]# >>>>> >>> >>> > >> >>>>> >>> >>> > >> Thanks, >>>>> >>> >>> > >> Punit >>>>> >>> >>> > >> >>>>> >>> >>> > >> On Mon, Nov 24, 2014 at 6:36 PM, Kanagaraj >>>>> >>> >>> > >> <kmayilsa@redhat.com> >>>>> >>> >>> > >> wrote: >>>>> >>> >>> > >> >>>>> >>> >>> > >>> Can you send the corresponding error in vdsm.log >>>>> >>> >>> > >>> from >>>>> >>> >>> > >>> the >>>>> >>> >>> > >>> host? >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> Also check if glusterd service is running. >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> Thanks, >>>>> >>> >>> > >>> Kanagaraj >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> On 11/24/2014 03:39 PM, Punit Dambiwal wrote: >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> Hi, >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> After reboot my Hypervisior host can not activate >>>>> >>> >>> > >>> again >>>>> >>> >>> > >>> in the >>>>> >>> >>> > >>> cluster >>>>> >>> >>> > >>> and failed with the following error :- >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> Gluster command [<UNKNOWN>] failed on server... >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> Engine logs :- >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> 2014-11-24 18:05:28,397 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-64) START, >>>>> >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, >>>>> >>> >>> > >>> HostId >>>>> >>> >>> > >>> = >>>>> >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: >>>>> >>> >>> > >>> 5f251c90 >>>>> >>> >>> > >>> 2014-11-24 18:05:30,609 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-64) FINISH, >>>>> >>> >>> > >>> GlusterVolumesListVDSCommand, >>>>> >>> >>> > >>> return: >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@d95203e0 }, >>>>> >>> >>> > >>> log id: 5f251c90 >>>>> >>> >>> > >>> 2014-11-24 18:05:33,768 INFO >>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>> >>> >>> > >>> (ajp--127.0.0.1-8702-8) >>>>> >>> >>> > >>> [287d570d] Lock Acquired to object EngineLock >>>>> >>> >>> > >>> [exclusiveLocks= >>>>> >>> >>> > >>> key: >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a value: VDS >>>>> >>> >>> > >>> , sharedLocks= ] >>>>> >>> >>> > >>> 2014-11-24 18:05:33,795 INFO >>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>>> >>> >>> > >>> Running >>>>> >>> >>> > >>> command: >>>>> >>> >>> > >>> ActivateVdsCommand internal: false. 
Entities >>>>> >>> >>> > >>> affected : >>>>> >>> >>> > >>> ID: >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDSAction >>>>> >>> >>> > >>> group >>>>> >>> >>> > >>> MANIPULATE_HOST >>>>> >>> >>> > >>> with role type ADMIN >>>>> >>> >>> > >>> 2014-11-24 18:05:33,796 INFO >>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>>> >>> >>> > >>> Before >>>>> >>> >>> > >>> acquiring >>>>> >>> >>> > >>> lock in >>>>> >>> >>> > >>> order to prevent monitoring for host Compute5 from >>>>> >>> >>> > >>> data-center >>>>> >>> >>> > >>> SV_WTC >>>>> >>> >>> > >>> 2014-11-24 18:05:33,797 INFO >>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] Lock >>>>> >>> >>> > >>> acquired, >>>>> >>> >>> > >>> from >>>>> >>> >>> > >>> now a >>>>> >>> >>> > >>> monitoring of host will be skipped for host Compute5 >>>>> >>> >>> > >>> from >>>>> >>> >>> > >>> data-center >>>>> >>> >>> > >>> SV_WTC >>>>> >>> >>> > >>> 2014-11-24 18:05:33,817 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>>> >>> >>> > >>> START, >>>>> >>> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>>>> >>> >>> > >>> status=Unassigned, >>>>> >>> >>> > >>> nonOperationalReason=NONE, >>>>> >>> >>> > >>> stopSpmFailureLogged=false), >>>>> >>> >>> > >>> log id: >>>>> >>> >>> > >>> 1cbc7311 >>>>> >>> >>> > >>> 2014-11-24 18:05:33,820 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) [287d570d] >>>>> >>> >>> > >>> FINISH, >>>>> >>> >>> > >>> SetVdsStatusVDSCommand, log id: 1cbc7311 >>>>> >>> >>> > >>> 2014-11-24 18:05:34,086 INFO >>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Activate >>>>> >>> >>> > >>> finished. >>>>> >>> >>> > >>> Lock >>>>> >>> >>> > >>> released. >>>>> >>> >>> > >>> Monitoring can run now for host Compute5 from >>>>> >>> >>> > >>> data-center >>>>> >>> >>> > >>> SV_WTC >>>>> >>> >>> > >>> 2014-11-24 18:05:34,088 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Correlation ID: >>>>> >>> >>> > >>> 287d570d, >>>>> >>> >>> > >>> Job >>>>> >>> >>> > >>> ID: >>>>> >>> >>> > >>> 5ef8e4d6-b2bc-469e-8e81-7ef74b2a001a, Call Stack: >>>>> >>> >>> > >>> null, >>>>> >>> >>> > >>> Custom >>>>> >>> >>> > >>> Event ID: >>>>> >>> >>> > >>> -1, Message: Host Compute5 was activated by admin. 
>>>>> >>> >>> > >>> 2014-11-24 18:05:34,090 INFO >>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.ActivateVdsCommand] >>>>> >>> >>> > >>> (org.ovirt.thread.pool-8-thread-45) Lock freed to >>>>> >>> >>> > >>> object >>>>> >>> >>> > >>> EngineLock >>>>> >>> >>> > >>> [exclusiveLocks= key: >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a >>>>> >>> >>> > >>> value: >>>>> >>> >>> > >>> VDS >>>>> >>> >>> > >>> , sharedLocks= ] >>>>> >>> >>> > >>> 2014-11-24 18:05:35,792 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] START, >>>>> >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, >>>>> >>> >>> > >>> HostId >>>>> >>> >>> > >>> = >>>>> >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: >>>>> >>> >>> > >>> 48a0c832 >>>>> >>> >>> > >>> 2014-11-24 18:05:37,064 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) START, >>>>> >>> >>> > >>> GetHardwareInfoVDSCommand(HostName = Compute5, >>>>> >>> >>> > >>> HostId = >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> vds=Host[Compute5,0bf6b00f-7947-4411-b55a-cc5eea2b381a]), log >>>>> >>> >>> > >>> id: >>>>> >>> >>> > >>> 6d560cc2 >>>>> >>> >>> > >>> 2014-11-24 18:05:37,074 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHardwareInfoVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) FINISH, >>>>> >>> >>> > >>> GetHardwareInfoVDSCommand, log >>>>> >>> >>> > >>> id: 6d560cc2 >>>>> >>> >>> > >>> 2014-11-24 18:05:37,093 WARN >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsManager] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) Host Compute5 is >>>>> >>> >>> > >>> running >>>>> >>> >>> > >>> with >>>>> >>> >>> > >>> disabled >>>>> >>> >>> > >>> SELinux. >>>>> >>> >>> > >>> 2014-11-24 18:05:37,127 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsCpuFlagsOrClusterChangedCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] >>>>> >>> >>> > >>> Running >>>>> >>> >>> > >>> command: >>>>> >>> >>> > >>> HandleVdsCpuFlagsOrClusterChangedCommand internal: >>>>> >>> >>> > >>> true. 
>>>>> >>> >>> > >>> Entities >>>>> >>> >>> > >>> affected >>>>> >>> >>> > >>> : ID: 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: >>>>> >>> >>> > >>> VDS >>>>> >>> >>> > >>> 2014-11-24 18:05:37,147 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] START, >>>>> >>> >>> > >>> GlusterServersListVDSCommand(HostName = Compute5, >>>>> >>> >>> > >>> HostId >>>>> >>> >>> > >>> = >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a), log id: >>>>> >>> >>> > >>> 4faed87 >>>>> >>> >>> > >>> 2014-11-24 18:05:37,164 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterServersListVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [2b4a51cf] >>>>> >>> >>> > >>> FINISH, >>>>> >>> >>> > >>> GlusterServersListVDSCommand, log id: 4faed87 >>>>> >>> >>> > >>> 2014-11-24 18:05:37,189 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.SetNonOperationalVdsCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>>> >>> >>> > >>> Running >>>>> >>> >>> > >>> command: >>>>> >>> >>> > >>> SetNonOperationalVdsCommand internal: true. Entities >>>>> >>> >>> > >>> affected : >>>>> >>> >>> > >>> ID: >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >>>>> >>> >>> > >>> 2014-11-24 18:05:37,206 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] START, >>>>> >>> >>> > >>> SetVdsStatusVDSCommand(HostName = Compute5, HostId = >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a, >>>>> >>> >>> > >>> status=NonOperational, >>>>> >>> >>> > >>> nonOperationalReason=GLUSTER_COMMAND_FAILED, >>>>> >>> >>> > >>> stopSpmFailureLogged=false), >>>>> >>> >>> > >>> log id: fed5617 >>>>> >>> >>> > >>> 2014-11-24 18:05:37,209 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>>> >>> >>> > >>> FINISH, >>>>> >>> >>> > >>> SetVdsStatusVDSCommand, log id: fed5617 >>>>> >>> >>> > >>> 2014-11-24 18:05:37,223 ERROR >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>>> >>> >>> > >>> Correlation ID: >>>>> >>> >>> > >>> 4a84c4e5, >>>>> >>> >>> > >>> Job >>>>> >>> >>> > >>> ID: 4bfd4a6d-c3ef-468f-a40e-a3a6ca13011b, Call >>>>> >>> >>> > >>> Stack: >>>>> >>> >>> > >>> null, >>>>> >>> >>> > >>> Custom >>>>> >>> >>> > >>> Event >>>>> >>> >>> > >>> ID: -1, Message: Gluster command [<UNKNOWN>] failed >>>>> >>> >>> > >>> on >>>>> >>> >>> > >>> server >>>>> >>> >>> > >>> Compute5. 
>>>>> >>> >>> > >>> 2014-11-24 18:05:37,243 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [4a84c4e5] >>>>> >>> >>> > >>> Correlation ID: >>>>> >>> >>> > >>> null, >>>>> >>> >>> > >>> Call >>>>> >>> >>> > >>> Stack: null, Custom Event ID: -1, Message: Status of >>>>> >>> >>> > >>> host >>>>> >>> >>> > >>> Compute5 >>>>> >>> >>> > >>> was >>>>> >>> >>> > >>> set >>>>> >>> >>> > >>> to NonOperational. >>>>> >>> >>> > >>> 2014-11-24 18:05:37,272 INFO >>>>> >>> >>> > >>> [org.ovirt.engine.core.bll.HandleVdsVersionCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Running >>>>> >>> >>> > >>> command: >>>>> >>> >>> > >>> HandleVdsVersionCommand internal: true. Entities >>>>> >>> >>> > >>> affected : >>>>> >>> >>> > >>> ID: >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a Type: VDS >>>>> >>> >>> > >>> 2014-11-24 18:05:37,274 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-69) [a0c8a7f] Host >>>>> >>> >>> > >>> 0bf6b00f-7947-4411-b55a-cc5eea2b381a : Compute5 is >>>>> >>> >>> > >>> already in >>>>> >>> >>> > >>> NonOperational status for reason >>>>> >>> >>> > >>> GLUSTER_COMMAND_FAILED. >>>>> >>> >>> > >>> SetNonOperationalVds command is skipped. >>>>> >>> >>> > >>> 2014-11-24 18:05:38,065 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-55) [3706e836] >>>>> >>> >>> > >>> FINISH, >>>>> >>> >>> > >>> GlusterVolumesListVDSCommand, return: >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> {26ae1672-ee09-4a38-8fd2-72dd9974cc2b=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@4e72a1b1 }, >>>>> >>> >>> > >>> log id: 48a0c832 >>>>> >>> >>> > >>> 2014-11-24 18:05:43,243 INFO >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand] >>>>> >>> >>> > >>> (DefaultQuartzScheduler_Worker-35) START, >>>>> >>> >>> > >>> GlusterVolumesListVDSCommand(HostName = Compute4, >>>>> >>> >>> > >>> HostId >>>>> >>> >>> > >>> = >>>>> >>> >>> > >>> 33648a90-200c-45ca-89d5-1ce305d79a6a), log id: >>>>> >>> >>> > >>> 3ce13ebc >>>>> >>> >>> > >>> ^C >>>>> >>> >>> > >>> [root@ccr01 ~]# >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> Thanks, >>>>> >>> >>> > >>> Punit >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> _______________________________________________ >>>>> >>> >>> > >>> Users mailing >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> >>>>> >>> >>> > >>> listUsers@ovirt.orghttp://

If you use the network service, and the line After=network.target rpcbind.service is in glusterd.service, then glusterd only waits for the network service to start... You can try this to debug the problem with the network: if your IP address for gluster is, for example, 10.88.0.1, you can grep for "10.88" in an ExecStart line. You can create a new service (type oneshot) in systemd, for example /etc/systemd/system/address_up.service:

[Unit]
Description=address_up
After=network-online.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'until ip a s bond0 | grep -q 10.88; do usleep 100000; done'
## debug option
ExecStart=/bin/sh -c 'ip a s > /var/tmp/ip_information'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

And then you can modify /usr/lib/systemd/system/glusterd.service and add the new service to the "After" line:

After=network.target rpcbind.service address_up.service

Good luck,
Alberto

2014-12-10 3:21 GMT+01:00 Punit Dambiwal <hypunit@gmail.com>:
Hi Alberto,
I am using bonding (eth2+eth3 = bond0) for gluster communication, and I am not using NetworkManager.service, as all the interfaces are configured by oVirt itself... Please let me know how I can achieve this??
[root@cpu04 ~]# systemctl list-unit-files | grep Network
NetworkManager-dispatcher.service   disabled
NetworkManager-wait-online.service  disabled
NetworkManager.service              disabled
[root@cpu04 ~]# cat /usr/lib/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
[root@cpu04 ~]#
Thanks, Punit
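[Editor's note: one way to express "wait for the gluster network" on a host where NetworkManager is disabled is a systemd drop-in for glusterd. This is only a sketch, assuming the legacy network.service (the ifcfg files that oVirt writes) brings the bond up synchronously at boot; the drop-in file name is arbitrary.]

# /etc/systemd/system/glusterd.service.d/wait-for-network.conf
[Unit]
# Pull network-online.target into the boot transaction and order
# glusterd after it and after the legacy network script.
Wants=network-online.target
After=network-online.target network.service

Apply it with: mkdir -p /etc/systemd/system/glusterd.service.d && systemctl daemon-reload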
On Wed, Dec 10, 2014 at 6:21 AM, Alberto Larraz <alarraz@escoladeltreball.org> wrote:
I posted a bug in CentOS about a problem with NetworkManager-wait-online.service; this service is executed before network-online.target
http://bugs.centos.org/view.php?id=7825
I had a problem with mounting DRBD on CentOS 7 at boot time. I think it could be similar to the problem of mounting gluster at boot time on CentOS 7.
I modified the service, adding one line that waits for NetworkManager to set up the IP address on the interface that is used to connect the DRBD; the IP address on eth0 is 10.88.0.1 and the line I added is:
ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'
# cat /usr/lib/systemd/system/NetworkManager-wait-online.service
[Unit]
Description=Network Manager Wait Online
Requisite=NetworkManager.service
After=NetworkManager.service
Wants=network.target
Before=network.target network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/nm-online -q --timeout=30
ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'
Alberto
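[Editor's note: whichever waiting approach is used (the address_up.service Alberto sketches earlier in the thread, or the edited NetworkManager-wait-online.service), it still has to be enabled, and the ordering can be checked after a reboot. A sketch using standard systemd tooling; adjust the unit names to your setup:]

systemctl daemon-reload
systemctl enable address_up.service                  # if using the oneshot unit
systemctl enable NetworkManager-wait-online.service  # only meaningful if NetworkManager is in use
# after the next reboot, confirm glusterd really started after the address was up
systemctl show -p After glusterd.service
journalctl -b -u glusterd -u NetworkManager-wait-online -u address_up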
2014-12-05 14:50 GMT+01:00 Kaushal M <kshlmster@gmail.com>:
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
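[Editor's note: applied to the unit file Punit posted, that suggestion gives roughly the [Unit] section below. Adding network-online.target to After= as well is an editorial addition, not part of Kaushal's mail: Wants= alone pulls the target into the boot transaction but does not by itself order glusterd behind it.]

[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service network-online.target
Wants=network-online.target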
If even that fails, you could try adding an entry into /etc/hosts with the hostname of the system. This should prevent any more failures.
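[Editor's note: a sketch of such /etc/hosts entries. The names and addresses below are placeholders, not taken from this setup; what matters is that every peer's gluster IP maps to the exact string that was used at peer-probe / brick-creation time.]

# /etc/hosts
10.10.10.4   cpu04.stack.com   cpu04
10.10.10.5   cpu05.stack.com   cpu05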
I still don't believe it's a problem with Gluster. Gluster uses APIs provided by the system to perform name resolution. These definitely work correctly, because you can start GlusterD later. Since the resolution failure only happens during boot, it points to system or network setup issues during boot. To me it seems like the network isn't completely set up at that point in time.
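[Editor's note: one way to test that theory is to record what the network and the resolver look like at the exact moment glusterd starts, e.g. with a temporary, non-fatal ExecStartPre in a drop-in. This is an editor's sketch; the hostname is the peer name taken from the glusterd log quoted later in the thread.]

# /etc/systemd/system/glusterd.service.d/boot-net-debug.conf  (remove after debugging)
[Service]
ExecStartPre=-/bin/sh -c '{ date; ip addr; getent ahosts cpu05.zne01.hkg1.ovt.36stack.com; } >> /var/tmp/glusterd-boot-net.log 2>&1'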
~kaushal
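[Editor's note: a quick way to check that on the next failing boot might be the following — standard systemd/glibc tooling, with the hostname taken from the logs in this thread.]

journalctl -b -u glusterd.service -u network.service --no-pager
# compare timestamps: did glusterd start before the interface received its address?
getent hosts cpu05.zne01.hkg1.ovt.36stack.com
# if this resolves now but failed in the boot-time glusterd log, name resolution
# simply was not available yet when glusterd started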
On Fri, Dec 5, 2014 at 12:47 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
It seems it's a bug in glusterfs 3.6... even though I arranged for systemd to start the network service before glusterd, it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
Thanks, Punit
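[Editor's note: two checks that would show whether the ordering configured above actually took effect — plain systemd tooling, nothing glusterfs-specific.]

systemd-analyze critical-chain glusterd.service
# shows what glusterd actually waited for on the last boot
systemctl list-dependencies --after glusterd.service
# lists the units ordered before glusterd; network-online.target should show up here
# (with the unit above, it would instead appear under --before, which is the problem)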
On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user having a similar issue of GlusterD failing to start on the mailing list a while back. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration, and the problem vanished when he began using DHCP. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing you could look at.
I'll try to find the mail thread to see if it was the same problem as yours.
~kaushal
On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com>
wrote:
I don't know much about how the network target is brought up in CentOS7, but I'll try as much as I can.
It seems to me that, after the network has been brought up and by the time GlusterD is started,
a. The machine hasn't yet received its hostname, or
b. It hasn't yet registered with the name server.
This is causing name resolution failures.
I don't know if the network target could come up without the machine getting its hostname, so I'm pretty sure it's not a.
So it seems to be b. But that kind of signing in happens only in DDNS systems, which doesn't seem to be the case for you.
Both of these reasons might be wrong (most likely wrong). You'd do good if you could ask for help from someone with more experience in systemd + networking.
~kaushal
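[Editor's note: a small sketch of how hypotheses (a) and (b) could be tested right after a failing boot — the FQDN is the one from the logs in this thread, everything else is generic.]

hostname -f                      # (a) was the FQDN already set this early?
hostnamectl status               # static vs. transient hostname, as systemd sees it
getent hosts "$(hostname -f)"    # (b) can the host resolve its own name yet?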
On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com> wrote: > Hi Kaushal, > > This is the host...which i rebooted...would you mind to let me know how > i > can make the glusterd sevice come up after network...i am using > centos7...if > network is the issue... > > On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote: >> >> This peer cannot be identified. >> >> " [2014-12-03 02:29:25.998153] D >> [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] >> 0-management: >> Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com" >> >> I don't know why this address is not being resolved during boot time. >> If >> this is a valid peer, the the only reason I can think of this
>> network is not up. >> >> If you had previously detached the peer forcefully, the that could >> have >> left stale entries in some volumes. In this case as well, GlusterD >> will fail >> to identify the peer. >> >> Do either of these reasons seem a possibility to you? >> >> On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>> >>> Hi Kaushal, >>> >>> Please find the logs here :- http://ur1.ca/iyoe5 and >>> http://ur1.ca/iyoed >>> >>> On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com> >>> wrote: >>>> >>>> Hey Punit, >>>> In the logs you've provided, GlusterD appears to be running >>>> correctly. >>>> Could you provide the logs for the time period when GlusterD >>>> attempts to >>>> start but fails. >>>> >>>> ~kaushal >>>> >>>> On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>>>> >>>>> Hi Kaushal, >>>>> >>>>> Please find the logs here :- http://ur1.ca/iyhs5 and >>>>> http://ur1.ca/iyhue >>>>> >>>>> Thanks, >>>>> punit >>>>> >>>>> >>>>> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M < kshlmster@gmail.com> >>>>> wrote: >>>>>> >>>>>> Hey Punit, >>>>>> Could you start Glusterd in debug mode and provide the logs here? >>>>>> To start it in debug mode, append '-LDEBUG' to the ExecStart
>>>>>> in >>>>>> the service file. >>>>>> >>>>>> ~kaushal >>>>>> >>>>>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal < hypunit@gmail.com> >>>>>> wrote: >>>>>> > Hi, >>>>>> > >>>>>> > Can Any body help me on this ?? >>>>>> > >>>>>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal >>>>>> > <hypunit@gmail.com> >>>>>> > wrote: >>>>>> >> >>>>>> >> Hi Kaushal, >>>>>> >> >>>>>> >> Thanks for the detailed reply....let me explain my setup first >>>>>> >> :- >>>>>> >> >>>>>> >> 1. Ovirt Engine >>>>>> >> 2. 4* host as well as storage machine (Host and gluster >>>>>> >> combined) >>>>>> >> 3. Every host has 24 bricks... >>>>>> >> >>>>>> >> Now whenever the host machine reboot...it can come up but can >>>>>> >> not >>>>>> >> join the >>>>>> >> cluster again and through the following error "Gluster command >>>>>> >> [<UNKNOWN>] >>>>>> >> failed on server.." >>>>>> >> >>>>>> >> Please check my comment in line :- >>>>>> >> >>>>>> >> 1. Use the same string for doing the peer probe and for the >>>>>> >> brick >>>>>> >> address >>>>>> >> during volume create/add-brick. Ideally, we suggest you use >>>>>> >> properly >>>>>> >> resolvable FQDNs everywhere. If that is not possible, then use >>>>>> >> only >>>>>> >> IP >>>>>> >> addresses. Try to avoid short names. >>>>>> >> --------------- >>>>>> >> [root@cpu05 ~]# gluster peer status >>>>>> >> Number of Peers: 3 >>>>>> >> >>>>>> >> Hostname: cpu03.stack.com >>>>>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >>>>>> >> State: Peer in Cluster (Connected) >>>>>> >> >>>>>> >> Hostname: cpu04.stack.com >>>>>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >>>>>> >> State: Peer in Cluster (Connected) >>>>>> >> Other names: >>>>>> >> 10.10.0.8 >>>>>> >> >>>>>> >> Hostname: cpu02.stack.com >>>>>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >>>>>> >> State: Peer in Cluster (Connected) >>>>>> >> [root@cpu05 ~]# >>>>>> >> ---------------- >>>>>> >> 2. During boot up, make sure to launch glusterd only after
>>>>>> >> network is >>>>>> >> up. This will allow the new peer identification mechanism to do >>>>>> >> its >>>>>> >> job correctly. >>>>>> >> >> I think the service itself doing the same job.... >>>>>> >> >>>>>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >>>>>> >> [Unit] >>>>>> >> Description=GlusterFS, a clustered file-system server >>>>>> >> After=network.target rpcbind.service >>>>>> >> Before=network-online.target >>>>>> >> >>>>>> >> [Service] >>>>>> >> Type=forking >>>>>> >> PIDFile=/var/run/glusterd.pid >>>>>> >> LimitNOFILE=65536 >>>>>> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >>>>>> >> KillMode=process >>>>>> >> >>>>>> >> [Install] >>>>>> >> WantedBy=multi-user.target >>>>>> >> [root@cpu05 ~]# >>>>>> >> -------------------- >>>>>> >> >>>>>> >> gluster logs :- >>>>>> >> >>>>>> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >>>>>> >> [glusterfsd.c:2018:main] >>>>>> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd >>>>>> >> version >>>>>> >> 3.6.1 >>>>>> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >>>>>> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] >>>>>> >> 0-management: >>>>>> >> Maximum allowed open file descriptors set to 65536 >>>>>> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] >>>>>> >> 0-management: >>>>>> >> Using >>>>>> >> /var/lib/glusterd as working directory >>>>>> >> [2014-11-24 09:22:22.155216] W >>>>>> >> [rdma.c:4195:__gf_rdma_ctx_create] >>>>>> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >>>>>> >> such device) >>>>>> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >>>>>> >> 0-rdma.management: >>>>>> >> Failed to initialize IB Device >>>>>> >> [2014-11-24 09:22:22.155285] E >>>>>> >> [rpc-transport.c:333:rpc_transport_load] >>>>>> >> 0-rpc-transport: 'rdma' initialization failed >>>>>> >> [2014-11-24 09:22:22.155354] W >>>>>> >> [rpcsvc.c:1524:rpcsvc_transport_create] >>>>>> >> 0-rpc-service: cannot create listener, initing the
>>>>>> >> failed >>>>>> >> [2014-11-24 09:22:22.156290] I >>>>>> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >>>>>> >> geo-replication >>>>>> >> module not installed in the system >>>>>> >> [2014-11-24 09:22:22.161318] I >>>>>> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >>>>>> >> retrieved >>>>>> >> op-version: 30600 >>>>>> >> [2014-11-24 09:22:22.821800] I >>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>> >> 0-management: >>>>>> >> connect returned 0 >>>>>> >> [2014-11-24 09:22:22.825810] I >>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>> >> 0-management: >>>>>> >> connect returned 0 >>>>>> >> [2014-11-24 09:22:22.828705] I >>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>> >> 0-management: >>>>>> >> connect returned 0 >>>>>> >> [2014-11-24 09:22:22.828771] I >>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>> >> 0-management: setting frame-timeout to 600 >>>>>> >> [2014-11-24 09:22:22.832670] I >>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>> >> 0-management: setting frame-timeout to 600 >>>>>> >> [2014-11-24 09:22:22.835919] I >>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>> >> 0-management: setting frame-timeout to 600 >>>>>> >> [2014-11-24 09:22:22.840209] E >>>>>> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >>>>>> >> resolve >>>>>> >> brick failed in restore >>>>>> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >>>>>> >> 0-management: >>>>>> >> Initialization of volume 'management' failed, review your >>>>>> >> volfile >>>>>> >> again >>>>>> >> [2014-11-24 09:22:22.840245] E >>>>>> >> [graph.c:322:glusterfs_graph_init] >>>>>> >> 0-management: initializing translator failed >>>>>> >> [2014-11-24 09:22:22.840264] E >>>>>> >> [graph.c:525:glusterfs_graph_activate] >>>>>> >> 0-graph: init failed >>>>>> >> [2014-11-24 09:22:22.840754] W >>>>>> >> [glusterfsd.c:1194:cleanup_and_exit] >>>>>> >> (--> >>>>>> >> 0-: received signum (0), shutting down >>>>>> >> >>>>>> >> Thanks, >>>>>> >> Punit >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> >> >>>>>> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M >>>>>> >> <kshlmster@gmail.com> >>>>>> >> wrote: >>>>>> >>> >>>>>> >>> Based on the logs I can guess that glusterd is being started >>>>>> >>> before >>>>>> >>> the network has come up and that the addresses given to bricks >>>>>> >>> do >>>>>> >>> not >>>>>> >>> directly match the addresses used in during peer probe. >>>>>> >>> >>>>>> >>> The gluster_after_reboot log has the line "[2014-11-25 >>>>>> >>> 06:46:09.972113] E >>>>>> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >>>>>> >>> 0-glusterd: resolve brick failed in restore". >>>>>> >>> >>>>>> >>> Brick resolution fails when glusterd cannot match the address >>>>>> >>> for >>>>>> >>> the >>>>>> >>> brick, with one of the peers. Brick resolution happens in two >>>>>> >>> phases, >>>>>> >>> 1. We first try to identify the peer by performing string >>>>>> >>> comparisions >>>>>> >>> with the brick address and the peer addresses (The peer names >>>>>> >>> will >>>>>> >>> be >>>>>> >>> the names/addresses that were given when the peer was
>>>>>> >>> 2. If we don't find a match from step 1, we will then resolve >>>>>> >>> all >>>>>> >>> the >>>>>> >>> brick address and the peer addresses into addrinfo structs, >>>>>> >>> and >>>>>> >>> then >>>>>> >>> compare these structs to find a match. This process should >>>>>> >>> generally >>>>>> >>> find a match if available. This will fail only if the network >>>>>> >>> is >>>>>> >>> not >>>>>> >>> up yet as we cannot resolve addresses. >>>>>> >>> >>>>>> >>> The above steps are applicable only to glusterfs versions >>>>>> >>> >=3.6. >>>>>> >>> They >>>>>> >>> were introduced to reduce problems with peer identification, >>>>>> >>> like >>>>>> >>> the >>>>>> >>> one you encountered >>>>>> >>> >>>>>> >>> Since both of the steps failed to find a match in one run, but >>>>>> >>> succeeded later, we can come to the conclusion that, >>>>>> >>> a) the bricks don't have the exact same string used in
>>>>>> >>> probe >>>>>> >>> for >>>>>> >>> their addresses as step 1 failed, and >>>>>> >>> b) the network was not up in the initial run, as step 2 failed >>>>>> >>> during >>>>>> >>> the initial run, but passed in the second run. >>>>>> >>> >>>>>> >>> Please let me know if my conclusion is correct. >>>>>> >>> >>>>>> >>> If it is, you can solve your problem in two ways. >>>>>> >>> 1. Use the same string for doing the peer probe and for
>>>>>> >>> brick >>>>>> >>> address during volume create/add-brick. Ideally, we suggest >>>>>> >>> you >>>>>> >>> use >>>>>> >>> properly resolvable FQDNs everywhere. If that is not
>>>>>> >>> then >>>>>> >>> use only IP addresses. Try to avoid short names. >>>>>> >>> 2. During boot up, make sure to launch glusterd only after the >>>>>> >>> network >>>>>> >>> is up. This will allow the new peer identification mechanism >>>>>> >>> to do >>>>>> >>> its >>>>>> >>> job correctly. >>>>>> >>> >>>>>> >>> >>>>>> >>> If you have already followed these steps and yet still hit the >>>>>> >>> problem, then please provide more information (setup, logs, >>>>>> >>> etc.). >>>>>> >>> It >>>>>> >>> could be much different problem that you are facing. >>>>>> >>> >>>>>> >>> ~kaushal >>>>>> >>> >>>>>> >>> On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal >>>>>> >>> <hypunit@gmail.com> >>>>>> >>> wrote: >>>>>> >>> > Is there any one can help on this ?? >>>>>> >>> > >>>>>> >>> > Thanks, >>>>>> >>> > punit >>>>>> >>> > >>>>>> >>> > On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal >>>>>> >>> > <hypunit@gmail.com> >>>>>> >>> > wrote: >>>>>> >>> >> >>>>>> >>> >> Hi, >>>>>> >>> >> >>>>>> >>> >> My Glusterfs version is :- glusterfs-3.6.1-1.el7 >>>>>> >>> >> >>>>>> >>> >> On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy >>>>>> >>> >> <kmayilsa@redhat.com> >>>>>> >>> >> wrote: >>>>>> >>> >>> >>>>>> >>> >>> [+Gluster-users@gluster.org] >>>>>> >>> >>> >>>>>> >>> >>> "Initialization of volume 'management' failed, review your >>>>>> >>> >>> volfile >>>>>> >>> >>> again", glusterd throws this error when the service is >>>>>> >>> >>> started >>>>>> >>> >>> automatically >>>>>> >>> >>> after the reboot. But the service is successfully started >>>>>> >>> >>> later >>>>>> >>> >>> manually by >>>>>> >>> >>> the user. >>>>>> >>> >>> >>>>>> >>> >>> can somebody from gluster-users please help on this? >>>>>> >>> >>> >>>>>> >>> >>> glusterfs version: 3.5.1 >>>>>> >>> >>> >>>>>> >>> >>> Thanks, >>>>>> >>> >>> Kanagaraj >>>>>> >>> >>> >>>>>> >>> >>> ----- Original Message ----- >>>>>> >>> >>> > From: "Punit Dambiwal" <hypunit@gmail.com> >>>>>> >>> >>> > To: "Kanagaraj" <kmayilsa@redhat.com> >>>>>> >>> >>> > Cc: users@ovirt.org >>>>>> >>> >>> > Sent: Tuesday, November 25, 2014 7:24:45 PM >>>>>> >>> >>> > Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] >>>>>> >>> >>> > failed on >>>>>> >>> >>> > server... >>>>>> >>> >>> > >>>>>> >>> >>> > Hi Kanagraj, >>>>>> >>> >>> > >>>>>> >>> >>> > Please check the attached log files....i didn't find any >>>>>> >>> >>> > thing >>>>>> >>> >>> > special.... >>>>>> >>> >>> > >>>>>> >>> >>> > On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj >>>>>> >>> >>> > <kmayilsa@redhat.com> >>>>>> >>> >>> > wrote: >>>>>> >>> >>> > >>>>>> >>> >>> > > Do you see any errors in >>>>>> >>> >>> > > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or >>>>>> >>> >>> > > vdsm.log >>>>>> >>> >>> > > when >>>>>> >>> >>> > > the >>>>>> >>> >>> > > service is trying to start automatically after the >>>>>> >>> >>> > > reboot? >>>>>> >>> >>> > > >>>>>> >>> >>> > > Thanks, >>>>>> >>> >>> > > Kanagaraj >>>>>> >>> >>> > > >>>>>> >>> >>> > > >>>>>> >>> >>> > > On 11/24/2014 08:13 PM, Punit Dambiwal wrote: >>>>>> >>> >>> > > >>>>>> >>> >>> > > Hi Kanagaraj, >>>>>> >>> >>> > > >>>>>> >>> >>> > > Yes...once i will start the gluster service and
>>>>>> >>> >>> > > vdsmd >>>>>> >>> >>> > > ...the >>>>>> >>> >>> > > host >>>>>> >>> >>> > > can connect to cluster...but the question is why it's >>>>>> >>> >>> > > not >>>>>> >>> >>> > > started >>>>>> >>> >>> > > even it >>>>>> >>> >>> > > has chkconfig enabled... >>>>>> >>> >>> > > >>>>>> >>> >>> > > I have tested it in two host cluster >>>>>> >>> >>> > > environment...(Centos 6.6 >>>>>> >>> >>> > > and >>>>>> >>> >>> > > centos 7.0) on both hypervisior cluster..it's failed >>>>>> >>> >>> > > to >>>>>> >>> >>> > > reconnect >>>>>> >>> >>> > > in >>>>>> >>> >>> > > to >>>>>> >>> >>> > > cluster after reboot.... >>>>>> >>> >>> > > >>>>>> >>> >>> > > In both the environment glusterd enabled for next >>>>>> >>> >>> > > boot....but >>>>>> >>> >>> > > it's >>>>>> >>> >>> > > failed with the same error....seems it's bug in either >>>>>> >>> >>> > > gluster or >>>>>> >>> >>> > > Ovirt ?? >>>>>> >>> >>> > > >>>>>> >>> >>> > > Please help me to find the workaround here if can not >>>>>> >>> >>> > > resolve >>>>>> >>> >>> > > it...as >>>>>> >>> >>> > > without this the Host machine can not connect after >>>>>> >>> >>> > > reboot....that >>>>>> >>> >>> > > means >>>>>> >>> >>> > > engine will consider it as down and every time need to >>>>>> >>> >>> > > manually >>>>>> >>> >>> > > start >>>>>> >>> >>> > > the >>>>>> >>> >>> > > gluster service and vdsmd... ?? >>>>>> >>> >>> > > >>>>>> >>> >>> > > Thanks, >>>>>> >>> >>> > > Punit >>>>>> >>> >>> > > >>>>>> >>> >>> > > On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj >>>>>> >>> >>> > > <kmayilsa@redhat.com> >>>>>> >>> >>> > > wrote: >>>>>> >>> >>> > > >>>>>> >>> >>> > >> From vdsm.log "error: Connection failed. Please >>>>>> >>> >>> > >> check if >>>>>> >>> >>> > >> gluster >>>>>> >>> >>> > >> daemon >>>>>> >>> >>> > >> is operational." >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Starting glusterd service should fix this issue. >>>>>> >>> >>> > >> 'service >>>>>> >>> >>> > >> glusterd >>>>>> >>> >>> > >> start' >>>>>> >>> >>> > >> But i am wondering why the glusterd was not started >>>>>> >>> >>> > >> automatically >>>>>> >>> >>> > >> after >>>>>> >>> >>> > >> the reboot. 
>>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Thanks, >>>>>> >>> >>> > >> Kanagaraj >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> On 11/24/2014 07:18 PM, Punit Dambiwal wrote: >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Hi Kanagaraj, >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Please find the attached VDSM logs :- >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> ---------------- >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) >>>>>> >>> >>> > >> Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 >>>>>> >>> >>> > >> aborting >>>>>> >>> >>> > >> False >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) >>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>>>> >>> >>> > >> from >>>>>> >>> >>> > >> state >>>>>> >>> >>> > >> init >>>>>> >>> >>> > >> -> >>>>>> >>> >>> > >> state preparing >>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>>> >>> >>> > >> 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run >>>>>> >>> >>> > >> and >>>>>> >>> >>> > >> protect: >>>>>> >>> >>> > >> repoStats(options=None) >>>>>> >>> >>> > >> Thread-13::INFO::2014-11-24 >>>>>> >>> >>> > >> 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run >>>>>> >>> >>> > >> and >>>>>> >>> >>> > >> protect: >>>>>> >>> >>> > >> repoStats, Return response: {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) >>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving >>>>>> >>> >>> > >> from >>>>>> >>> >>> > >> state >>>>>> >>> >>> > >> preparing >>>>>> >>> >>> > >> -> >>>>>> >>> >>> > >> state finished >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) >>>>>> >>> >>> > >> Owner.releaseAll requests {} resources {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) >>>>>> >>> >>> > >> Owner.cancelAll requests {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) >>>>>> >>> >>> > >> Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 >>>>>> >>> >>> > >> aborting >>>>>> >>> >>> > >> False >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) >>>>>> >>> >>> > >> client >>>>>> >>> >>> > >> [10.10.10.2]::call >>>>>> >>> >>> > >> 
getCapabilities with () {} >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,553::utils::738::root::(execCmd) >>>>>> >>> >>> > >> /sbin/ip route show to 0.0.0.0/0 table all (cwd None) >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,560::utils::758::root::(execCmd) >>>>>> >>> >>> > >> SUCCESS: <err> = ''; <rc> = 0 >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,588::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,592::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-object',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,593::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-plugin',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-account',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-proxy',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,598::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-doc',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('gluster-swift-container',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,599::caps::728::root::(_getKeyPackages) rpm >>>>>> >>> >>> > >> package >>>>>> >>> >>> > >> ('glusterfs-geo-replication',) not found >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,600::caps::646::root::(get) >>>>>> >>> >>> > >> VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 >>>>>> >>> >>> > >> required >>>>>> >>> >>> > >> >= >>>>>> >>> >>> > >> 0.10.2-31 >>>>>> >>> >>> > >> Thread-13::DEBUG::2014-11-24 >>>>>> >>> >>> > >> 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) >>>>>> >>> >>> > >> return >>>>>> >>> >>> > >> getCapabilities >>>>>> >>> >>> > >> with {'status': {'message': 'Done', 'code': 0}, >>>>>> >>> >>> > >> 'info': >>>>>> >>> >>> > >> {'HBAInventory': >>>>>> >>> >>> > >> {'iSCSI': [{'InitiatorName': >>>>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8'}], >>>>>> >>> >>> > >> 'FC': >>>>>> >>> >>> > >> []}, 'packages2': {'kernel': {'release': >>>>>> >>> >>> > >> '431.el6.x86_64', >>>>>> >>> >>> > >> 'buildtime': >>>>>> >>> >>> > >> 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': >>>>>> >>> >>> > >> {'release': >>>>>> >>> >>> > >> '1.el6', >>>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, >>>>>> >>> >>> > >> 'glusterfs-fuse': >>>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>>>> >>> >>> > >> 'version': >>>>>> >>> >>> > >> '3.5.1'}, >>>>>> >>> >>> > >> 'spice-server': {'release': '6.el6_5.2', 'buildtime': >>>>>> >>> >>> > >> 1402324637L, >>>>>> >>> >>> > >> 'version': '0.12.4'}, 'vdsm': {'release': >>>>>> >>> >>> > >> '1.gitdb83943.el6', >>>>>> >>> >>> > >> 'buildtime': >>>>>> >>> >>> > >> 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': >>>>>> >>> 
>>> > >> {'release': >>>>>> >>> >>> > >> '2.415.el6_5.10', 'buildtime': 1402435700L, >>>>>> >>> >>> > >> 'version': >>>>>> >>> >>> > >> '0.12.1.2'}, >>>>>> >>> >>> > >> 'qemu-img': {'release': '2.415.el6_5.10', >>>>>> >>> >>> > >> 'buildtime': >>>>>> >>> >>> > >> 1402435700L, >>>>>> >>> >>> > >> 'version': '0.12.1.2'}, 'libvirt': {'release': >>>>>> >>> >>> > >> '29.el6_5.9', >>>>>> >>> >>> > >> 'buildtime': >>>>>> >>> >>> > >> 1402404612L, 'version': '0.10.2'}, 'glusterfs': >>>>>> >>> >>> > >> {'release': >>>>>> >>> >>> > >> '1.el6', >>>>>> >>> >>> > >> 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': >>>>>> >>> >>> > >> {'release': >>>>>> >>> >>> > >> '2.el6', >>>>>> >>> >>> > >> 'buildtime': 1403794344L, 'version': '0.4.1'}, >>>>>> >>> >>> > >> 'glusterfs-server': >>>>>> >>> >>> > >> {'release': '1.el6', 'buildtime': 1403622628L, >>>>>> >>> >>> > >> 'version': >>>>>> >>> >>> > >> '3.5.1'}}, >>>>>> >>> >>> > >> 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, >>>>>> >>> >>> > >> 'cpuModel': >>>>>> >>> >>> > >> 'Intel(R) >>>>>> >>> >>> > >> Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': >>>>>> >>> >>> > >> 'false', >>>>>> >>> >>> > >> 'hooks': >>>>>> >>> >>> > >> {}, >>>>>> >>> >>> > >> 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': >>>>>> >>> >>> > >> {'mode': '1'}, >>>>>> >>> >>> > >> 'kdumpStatus': 0, 'supportedProtocols': ['2.2', >>>>>> >>> >>> > >> '2.3'], >>>>>> >>> >>> > >> 'networks': >>>>>> >>> >>> > >> {'ovirtmgmt': {'iface': u'bond0.10', 'addr': >>>>>> >>> >>> > >> '43.252.176.16', >>>>>> >>> >>> > >> 'bridged': >>>>>> >>> >>> > >> False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'bootproto4': 'none', 'netmask': '255.255.255.0', >>>>>> >>> >>> > >> 'ipv4addrs': >>>>>> >>> >>> > >> [' >>>>>> >>> >>> > >> 43.252.176.16/24' <http://43.252.176.16/24%27>], >>>>>> >>> >>> > >> 'interface': >>>>>> >>> >>> > >> u'bond0.10', 'ipv6gateway': '::', 'gateway': >>>>>> >>> >>> > >> '43.25.17.1'}, >>>>>> >>> >>> > >> 'Internal': >>>>>> >>> >>> > >> {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'HOTPLUG': >>>>>> >>> >>> > >> 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'BOOTPROTO': >>>>>> >>> >>> > >> 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': >>>>>> >>> >>> > >> 'Bridge', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, 'bridged': True, 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>>>> >>> >>> > >> 'gateway': '', 'bootproto4': 'none', 'netmask': '', >>>>>> >>> >>> > >> 'stp': >>>>>> >>> >>> > >> 'off', >>>>>> >>> >>> > >> 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', >>>>>> >>> >>> > >> 'ports': >>>>>> >>> >>> > >> ['bond1.100']}, 'storage': {'iface': u'bond1', >>>>>> >>> >>> > >> 'addr': >>>>>> >>> >>> > >> '10.10.10.6', >>>>>> >>> >>> > >> 'bridged': False, 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '9000', 'bootproto4': 'none', 'netmask': >>>>>> >>> >>> > >> '255.255.255.0', >>>>>> >>> >>> > >> 'ipv4addrs': [' >>>>>> >>> >>> > >> 10.10.10.6/24' <http://10.10.10.6/24%27>], >>>>>> >>> >>> > >> 'interface': >>>>>> >>> >>> > >> u'bond1', >>>>>> >>> >>> > >> 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': >>>>>> >>> >>> > >> {'iface': >>>>>> >>> >>> > >> 'VMNetwork', >>>>>> >>> >>> > >> 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'MTU': 
>>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>>>> >>> >>> > >> 'none', >>>>>> >>> >>> > >> 'STP': >>>>>> >>> >>> > >> 'off', >>>>>> >>> >>> > >> 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, >>>>>> >>> >>> > >> 'bridged': >>>>>> >>> >>> > >> True, >>>>>> >>> >>> > >> 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], >>>>>> >>> >>> > >> 'gateway': >>>>>> >>> >>> > >> '', >>>>>> >>> >>> > >> 'bootproto4': >>>>>> >>> >>> > >> 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'ipv6gateway': '::', 'ports': ['bond0.36']}}, >>>>>> >>> >>> > >> 'bridges': >>>>>> >>> >>> > >> {'Internal': >>>>>> >>> >>> > >> {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'MTU': >>>>>> >>> >>> > >> '9000', >>>>>> >>> >>> > >> 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': >>>>>> >>> >>> > >> 'none', >>>>>> >>> >>> > >> 'STP': >>>>>> >>> >>> > >> 'off', >>>>>> >>> >>> > >> 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, >>>>>> >>> >>> > >> 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', >>>>>> >>> >>> > >> 'netmask': '', >>>>>> >>> >>> > >> 'stp': >>>>>> >>> >>> > >> 'off', 'ipv4addrs': [], 'ipv6gateway': '::', >>>>>> >>> >>> > >> 'gateway': >>>>>> >>> >>> > >> '', >>>>>> >>> >>> > >> 'opts': >>>>>> >>> >>> > >> {'topology_change_detected': '0', >>>>>> >>> >>> > >> 'multicast_last_member_count': >>>>>> >>> >>> > >> '2', >>>>>> >>> >>> > >> 'hash_elasticity': '4', >>>>>> >>> >>> > >> 'multicast_query_response_interval': >>>>>> >>> >>> > >> '999', >>>>>> >>> >>> > >> 'multicast_snooping': '1', >>>>>> >>> >>> > >> 'multicast_startup_query_interval': >>>>>> >>> >>> > >> '3124', >>>>>> >>> >>> > >> 'hello_timer': '31', 'multicast_querier_interval': >>>>>> >>> >>> > >> '25496', >>>>>> >>> >>> > >> 'max_age': >>>>>> >>> >>> > >> '1999', 'hash_max': '512', 'stp_state': '0', >>>>>> >>> >>> > >> 'root_id': >>>>>> >>> >>> > >> '8000.001018cddaac', 'priority': '32768', >>>>>> >>> >>> > >> 'multicast_membership_interval': >>>>>> >>> >>> > >> '25996', 'root_path_cost': '0', 'root_port': '0', >>>>>> >>> >>> > >> 'multicast_querier': >>>>>> >>> >>> > >> '0', >>>>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>>>> >>> >>> > >> '199', >>>>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>>>> >>> >>> > >> '8000.001018cddaac', >>>>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>>>> >>> >>> > >> 'gc_timer': >>>>>> >>> >>> > >> '31', >>>>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>>>> >>> >>> > >> 'multicast_last_member_interval': >>>>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>>>> >>> >>> > >> 'ports': >>>>>> >>> >>> > >> ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': >>>>>> >>> >>> > >> {'DEFROUTE': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', >>>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': >>>>>> >>> >>> > >> 'VMNetwork', >>>>>> >>> >>> > >> 'TYPE': >>>>>> >>> >>> > >> 'Bridge', >>>>>> >>> >>> > >> 'ONBOOT': 'no'}, 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': 
[], >>>>>> >>> >>> > >> 'ipv6gateway': >>>>>> >>> >>> > >> '::', >>>>>> >>> >>> > >> 'gateway': '', 'opts': {'topology_change_detected': >>>>>> >>> >>> > >> '0', >>>>>> >>> >>> > >> 'multicast_last_member_count': '2', >>>>>> >>> >>> > >> 'hash_elasticity': >>>>>> >>> >>> > >> '4', >>>>>> >>> >>> > >> 'multicast_query_response_interval': '999', >>>>>> >>> >>> > >> 'multicast_snooping': >>>>>> >>> >>> > >> '1', >>>>>> >>> >>> > >> 'multicast_startup_query_interval': '3124', >>>>>> >>> >>> > >> 'hello_timer': >>>>>> >>> >>> > >> '131', >>>>>> >>> >>> > >> 'multicast_querier_interval': '25496', 'max_age': >>>>>> >>> >>> > >> '1999', >>>>>> >>> >>> > >> 'hash_max': >>>>>> >>> >>> > >> '512', 'stp_state': '0', 'root_id': >>>>>> >>> >>> > >> '8000.60eb6920b46c', >>>>>> >>> >>> > >> 'priority': >>>>>> >>> >>> > >> '32768', 'multicast_membership_interval': '25996', >>>>>> >>> >>> > >> 'root_path_cost': >>>>>> >>> >>> > >> '0', >>>>>> >>> >>> > >> 'root_port': '0', 'multicast_querier': '0', >>>>>> >>> >>> > >> 'multicast_startup_query_count': '2', 'hello_time': >>>>>> >>> >>> > >> '199', >>>>>> >>> >>> > >> 'topology_change': '0', 'bridge_id': >>>>>> >>> >>> > >> '8000.60eb6920b46c', >>>>>> >>> >>> > >> 'topology_change_timer': '0', 'ageing_time': '29995', >>>>>> >>> >>> > >> 'gc_timer': >>>>>> >>> >>> > >> '31', >>>>>> >>> >>> > >> 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', >>>>>> >>> >>> > >> 'multicast_query_interval': '12498', >>>>>> >>> >>> > >> 'multicast_last_member_interval': >>>>>> >>> >>> > >> '99', 'multicast_router': '1', 'forward_delay': '0'}, >>>>>> >>> >>> > >> 'ports': >>>>>> >>> >>> > >> ['bond0.36']}}, 'uuid': >>>>>> >>> >>> > >> '44454C4C-4C00-1057-8053-B7C04F504E31', >>>>>> >>> >>> > >> 'lastClientIface': 'bond1', 'nics': {'eth3': >>>>>> >>> >>> > >> {'permhwaddr': >>>>>> >>> >>> > >> '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': >>>>>> >>> >>> > >> 'yes', >>>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', >>>>>> >>> >>> > >> 'MASTER': >>>>>> >>> >>> > >> 'bond1', >>>>>> >>> >>> > >> 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '9000', >>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>>>> >>> >>> > >> 'speed': >>>>>> >>> >>> > >> 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', >>>>>> >>> >>> > >> 'addr': '', >>>>>> >>> >>> > >> 'cfg': >>>>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>>>> >>> >>> > >> '9000', >>>>>> >>> >>> > >> 'HWADDR': >>>>>> >>> >>> > >> '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': >>>>>> >>> >>> > >> 'eth2', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, >>>>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', >>>>>> >>> >>> > >> 'ipv4addrs': [], >>>>>> >>> >>> > >> 'hwaddr': >>>>>> >>> >>> > >> '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': >>>>>> >>> >>> > >> {'permhwaddr': >>>>>> >>> >>> > >> '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': >>>>>> >>> >>> > >> 'yes', >>>>>> >>> >>> > >> 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', >>>>>> >>> >>> > >> 'MASTER': >>>>>> >>> >>> > >> 'bond0', >>>>>> >>> >>> > >> 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'netmask': '', 'ipv4addrs': [], 'hwaddr': >>>>>> >>> >>> > >> '60:eb:69:20:b4:6c', >>>>>> >>> >>> > >> 'speed': >>>>>> >>> >>> > >> 1000}, 'eth0': {'permhwaddr': 
'60:eb:69:20:b4:6c', >>>>>> >>> >>> > >> 'addr': '', >>>>>> >>> >>> > >> 'cfg': >>>>>> >>> >>> > >> {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': >>>>>> >>> >>> > >> '1500', >>>>>> >>> >>> > >> 'HWADDR': >>>>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': >>>>>> >>> >>> > >> 'eth0', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'yes'}, >>>>>> >>> >>> > >> 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', >>>>>> >>> >>> > >> 'ipv4addrs': [], >>>>>> >>> >>> > >> 'hwaddr': >>>>>> >>> >>> > >> '60:eb:69:20:b4:6c', 'speed': 1000}}, >>>>>> >>> >>> > >> 'software_revision': '1', >>>>>> >>> >>> > >> 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', >>>>>> >>> >>> > >> '3.5'], >>>>>> >>> >>> > >> 'cpuFlags': >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> >>>>>> >>> >>> > >> u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', >>>>>> >>> >>> > >> 'ISCSIInitiatorName': >>>>>> >>> >>> > >> 'iqn.1994-05.com.redhat:32151ce183c8', >>>>>> >>> >>> > >> 'netConfigDirty': 'False', 'supportedENGINEs': >>>>>> >>> >>> > >> ['3.0', >>>>>> >>> >>> > >> '3.1', >>>>>> >>> >>> > >> '3.2', >>>>>> >>> >>> > >> '3.3', >>>>>> >>> >>> > >> '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': >>>>>> >>> >>> > >> '321', >>>>>> >>> >>> > >> 'bondings': >>>>>> >>> >>> > >> {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', >>>>>> >>> >>> > >> 'netmask': '', >>>>>> >>> >>> > >> 'slaves': >>>>>> >>> >>> > >> [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': >>>>>> >>> >>> > >> '', >>>>>> >>> >>> > >> 'cfg': >>>>>> >>> >>> > >> {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'BONDING_OPTS': >>>>>> >>> >>> > >> 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': >>>>>> >>> >>> > >> 'yes'}, >>>>>> >>> >>> > >> 'ipv6addrs': >>>>>> >>> >>> > >> ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', >>>>>> >>> >>> > >> 'netmask': '', >>>>>> >>> >>> > >> 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', >>>>>> >>> >>> > >> 'slaves': >>>>>> >>> >>> > >> ['eth0', >>>>>> >>> >>> > >> 'eth1'], >>>>>> >>> >>> > >> 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': >>>>>> >>> >>> > >> {'addr': >>>>>> >>> >>> > >> '10.10.10.6', >>>>>> >>> >>> > >> 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', >>>>>> >>> >>> > >> 'HOTPLUG': >>>>>> >>> >>> > >> 'no', >>>>>> >>> >>> > >> 'MTU': >>>>>> >>> >>> > >> '9000', 'NM_CONTROLLED': 'no', 'NETMASK': >>>>>> >>> >>> > >> '255.255.255.0', >>>>>> >>> >>> > >> 'BOOTPROTO': >>>>>> >>> >>> > >> 'none', 'BONDING_OPTS': 'mode=4 miimon=100', >>>>>> >>> >>> > >> 'DEVICE': >>>>>> >>> >>> > >> 'bond1', >>>>>> >>> >>> > >> 'ONBOOT': >>>>>> >>> >>> > >> 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], >>>>>> >>> >>> > >> 'mtu': >>>>>> >>> >>> > >> '9000', >>>>>> >>> >>> > >> 'netmask': '255.255.255.0', 'ipv4addrs': >>>>>> >>> >>> > >> ['10.10.10.6/24' >>>>>> >>> >>> > >> <http://10.10.10.6/24%27>], 'hwaddr': >>>>>> >>> >>> > >> '00:10:18:cd:da:ac', >>>>>> >>>
...
[Message clipped]

Hi Alberto,
I tried it but still the same error...
On Thu, Dec 11, 2014 at 9:07 PM, Alberto Larraz <alarraz@escoladeltreball.org> wrote:
If you use the network service, and the line After=network.target rpcbind.service is in glusterd.service, then glusterd has to wait for the network service to start... You can try the following to debug the problem with the network:
if your IP address for gluster is, for example, 10.88.0.1, you can grep for "10.88" in the ExecStart line.
You can create a new service (type oneshot) in systemd, for example: /etc/systemd/system/address_up.service
[Unit] Description=address_up After=network-online.target
[Service]
Type=oneshot ExecStart=/bin/sh -c 'until ip a s bond0 | grep -q 10.88; do usleep 100000; done'
## debug option ExecStart=/bin/sh -c 'ip a s > /var/tmp/ip_information'
RemainAfterExit=yes
[Install] WantedBy=multi-user.target
And then you can modify /usr/lib/systemd/system/glusterd.service and add the new service to the "After" line:
After=network.target rpcbind.service address_up.service
Good luck:
Alberto
2014-12-10 3:21 GMT+01:00 Punit Dambiwal <hypunit@gmail.com>:
Hi Alberto,
I am using bonding (eth2+eth3=bond0) for gluster communication, and I am not using NetworkManager.service, as all the interfaces are configured by oVirt itself. Please let me know how I can achieve this.
[root@cpu04 ~]# systemctl list-unit-files | grep Network
NetworkManager-dispatcher.service      disabled
NetworkManager-wait-online.service     disabled
NetworkManager.service                 disabled
[root@cpu04 ~]# cat /usr/lib/systemd/system/glusterd.service
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
[root@cpu04 ~]#
Thanks, Punit
On Wed, Dec 10, 2014 at 6:21 AM, Alberto Larraz < alarraz@escoladeltreball.org> wrote:
I posted a bug in CentOS about a problem with NetworkManager-wait-online.service; this service is executed before network-online.target:
http://bugs.centos.org/view.php?id=7825
I had a problem mounting DRBD on a CentOS 7 machine at boot time. I think it could be similar to the problem of mounting gluster at boot time on CentOS 7.
I modified the service, adding one line that waits for NetworkManager to set up the IP address on the interface that is used to connect DRBD. The IP address on eth0 is 10.88.0.1, and the line I added is:
ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'
# cat /usr/lib/systemd/system/NetworkManager-wait-online.service
[Unit]
Description=Network Manager Wait Online
Requisite=NetworkManager.service
After=NetworkManager.service
Wants=network.target
Before=network.target network-online.target

[Service]
Type=oneshot
ExecStart=/usr/bin/nm-online -q --timeout=30
ExecStart=/bin/sh -c 'until ip a s eth0 | grep -q 10.88; do usleep 100000; done'
Alberto
2014-12-05 14:50 GMT+01:00 Kaushal M <kshlmster@gmail.com>:
Can you replace 'Before=network-online.target' with 'Wants=network-online.target' and try the boot again? This should force the network to be online before starting GlusterD.
If even that fails, you could try adding an entry into /etc/hosts with the hostname of the system. This should prevent any more failures.
I still don't believe it's a problem with Gluster. Gluster uses APIs provided by the system to perform name resolution. These definitely work correctly, because you can start GlusterD later. Since the resolution failure only happens during boot, it points to system or network setup issues during boot. To me it seems like the network isn't completely set up at that point in time.
~kaushal
On Fri, Dec 5, 2014 at 12:47 PM, Punit Dambiwal <hypunit@gmail.com> wrote:
Hi Kaushal,
It seems it's a bug in glusterfs 3.6... even though I arranged for systemd to start the network service before glusterd, it still fails...
---------------
[Unit]
Description=GlusterFS, a clustered file-system server
After=network.target rpcbind.service
Before=network-online.target

[Service]
Type=forking
PIDFile=/var/run/glusterd.pid
LimitNOFILE=65536
ExecStartPre=/etc/rc.d/init.d/network start
ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid
KillMode=process

[Install]
WantedBy=multi-user.target
----------------
Thanks, Punit
On Wed, Dec 3, 2014 at 8:56 PM, Kaushal M <kshlmster@gmail.com> wrote:
I just remembered this.
There was another user having a similar issue of GlusterD failing to start on the mailing list a while back. The cause of his problem was the way his network was brought up. IIRC, he was using a static network configuration, and the problem vanished when he began using DHCP. Or it might have been that he was using dhcp.service and it got solved after switching to NetworkManager.
This could be one more thing you could look at.
I'll try to find the mail thread to see if it was the same problem as yours.
~kaushal
On Wed, Dec 3, 2014 at 6:22 PM, Kaushal M <kshlmster@gmail.com>
wrote:
> I don't know much about how the network target is brought up in > CentOS7, but I'll try as much as I can. > > It seems to me that, after the network has been brought up and by
> time GlusterD is started, > a. The machine hasn't yet recieved it's hostname, or > b. It hasn't yet registered with the name server. > > This is causing name resolution failures. > > I don't know if the network target could come up without the machine > getting its hostname, so I'm pretty sure it's not a. > > So it seems to be b. But these kind of signing in happens only in DDNS > systems, which doesn't seem to be the case for you. > > Both of these reasons might be wrong (most likely wrong). You'd do > good if you could ask for help from someone with more experience in > systemd + networking. > > ~kaushal > > On Wed, Dec 3, 2014 at 10:54 AM, Punit Dambiwal <hypunit@gmail.com
> wrote: >> Hi Kaushal, >> >> This is the host...which i rebooted...would you mind to let me know how >> i >> can make the glusterd sevice come up after network...i am using >> centos7...if >> network is the issue... >> >> On Wed, Dec 3, 2014 at 11:54 AM, Kaushal M <kshlmster@gmail.com> wrote: >>> >>> This peer cannot be identified. >>> >>> " [2014-12-03 02:29:25.998153] D >>> [glusterd-peer-utils.c:121:glusterd_peerinfo_find_by_hostname] >>> 0-management: >>> Unable to find friend: cpu05.zne01.hkg1.ovt.36stack.com" >>> >>> I don't know why this address is not being resolved during boot time. >>> If >>> this is a valid peer, the the only reason I can think of this
>>> network is not up. >>> >>> If you had previously detached the peer forcefully, the that could >>> have >>> left stale entries in some volumes. In this case as well, GlusterD >>> will fail >>> to identify the peer. >>> >>> Do either of these reasons seem a possibility to you? >>> >>> On Dec 3, 2014 8:07 AM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>>> >>>> Hi Kaushal, >>>> >>>> Please find the logs here :- http://ur1.ca/iyoe5 and >>>> http://ur1.ca/iyoed >>>> >>>> On Tue, Dec 2, 2014 at 10:43 PM, Kaushal M <kshlmster@gmail.com
>>>> wrote: >>>>> >>>>> Hey Punit, >>>>> In the logs you've provided, GlusterD appears to be running >>>>> correctly. >>>>> Could you provide the logs for the time period when GlusterD >>>>> attempts to >>>>> start but fails. >>>>> >>>>> ~kaushal >>>>> >>>>> On Dec 2, 2014 8:03 PM, "Punit Dambiwal" <hypunit@gmail.com> wrote: >>>>>> >>>>>> Hi Kaushal, >>>>>> >>>>>> Please find the logs here :- http://ur1.ca/iyhs5 and >>>>>> http://ur1.ca/iyhue >>>>>> >>>>>> Thanks, >>>>>> punit >>>>>> >>>>>> >>>>>> On Tue, Dec 2, 2014 at 12:00 PM, Kaushal M < kshlmster@gmail.com> >>>>>> wrote: >>>>>>> >>>>>>> Hey Punit, >>>>>>> Could you start Glusterd in debug mode and provide the logs here? >>>>>>> To start it in debug mode, append '-LDEBUG' to the ExecStart
>>>>>>> in >>>>>>> the service file. >>>>>>> >>>>>>> ~kaushal >>>>>>> >>>>>>> On Mon, Dec 1, 2014 at 9:05 AM, Punit Dambiwal < hypunit@gmail.com> >>>>>>> wrote: >>>>>>> > Hi, >>>>>>> > >>>>>>> > Can Any body help me on this ?? >>>>>>> > >>>>>>> > On Thu, Nov 27, 2014 at 9:29 AM, Punit Dambiwal >>>>>>> > <hypunit@gmail.com> >>>>>>> > wrote: >>>>>>> >> >>>>>>> >> Hi Kaushal, >>>>>>> >> >>>>>>> >> Thanks for the detailed reply....let me explain my setup first >>>>>>> >> :- >>>>>>> >> >>>>>>> >> 1. Ovirt Engine >>>>>>> >> 2. 4* host as well as storage machine (Host and gluster >>>>>>> >> combined) >>>>>>> >> 3. Every host has 24 bricks... >>>>>>> >> >>>>>>> >> Now whenever the host machine reboot...it can come up but can >>>>>>> >> not >>>>>>> >> join the >>>>>>> >> cluster again and through the following error "Gluster command >>>>>>> >> [<UNKNOWN>] >>>>>>> >> failed on server.." >>>>>>> >> >>>>>>> >> Please check my comment in line :- >>>>>>> >> >>>>>>> >> 1. Use the same string for doing the peer probe and for
>>>>>>> >> brick >>>>>>> >> address >>>>>>> >> during volume create/add-brick. Ideally, we suggest you use >>>>>>> >> properly >>>>>>> >> resolvable FQDNs everywhere. If that is not possible,
>>>>>>> >> only >>>>>>> >> IP >>>>>>> >> addresses. Try to avoid short names. >>>>>>> >> --------------- >>>>>>> >> [root@cpu05 ~]# gluster peer status >>>>>>> >> Number of Peers: 3 >>>>>>> >> >>>>>>> >> Hostname: cpu03.stack.com >>>>>>> >> Uuid: 5729b8c4-e80d-4353-b456-6f467bddbdfb >>>>>>> >> State: Peer in Cluster (Connected) >>>>>>> >> >>>>>>> >> Hostname: cpu04.stack.com >>>>>>> >> Uuid: d272b790-c4b2-4bed-ba68-793656e6d7b0 >>>>>>> >> State: Peer in Cluster (Connected) >>>>>>> >> Other names: >>>>>>> >> 10.10.0.8 >>>>>>> >> >>>>>>> >> Hostname: cpu02.stack.com >>>>>>> >> Uuid: 8d8a7041-950e-40d0-85f9-58d14340ca25 >>>>>>> >> State: Peer in Cluster (Connected) >>>>>>> >> [root@cpu05 ~]# >>>>>>> >> ---------------- >>>>>>> >> 2. During boot up, make sure to launch glusterd only after the >>>>>>> >> network is >>>>>>> >> up. This will allow the new peer identification mechanism to do >>>>>>> >> its >>>>>>> >> job correctly. >>>>>>> >> >> I think the service itself doing the same job.... >>>>>>> >> >>>>>>> >> [root@cpu05 ~]# cat /usr/lib/systemd/system/glusterd.service >>>>>>> >> [Unit] >>>>>>> >> Description=GlusterFS, a clustered file-system server >>>>>>> >> After=network.target rpcbind.service >>>>>>> >> Before=network-online.target >>>>>>> >> >>>>>>> >> [Service] >>>>>>> >> Type=forking >>>>>>> >> PIDFile=/var/run/glusterd.pid >>>>>>> >> LimitNOFILE=65536 >>>>>>> >> ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid >>>>>>> >> KillMode=process >>>>>>> >> >>>>>>> >> [Install] >>>>>>> >> WantedBy=multi-user.target >>>>>>> >> [root@cpu05 ~]# >>>>>>> >> -------------------- >>>>>>> >> >>>>>>> >> gluster logs :- >>>>>>> >> >>>>>>> >> [2014-11-24 09:22:22.147471] I [MSGID: 100030] >>>>>>> >> [glusterfsd.c:2018:main] >>>>>>> >> 0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd >>>>>>> >> version >>>>>>> >> 3.6.1 >>>>>>> >> (args: /usr/sbin/glusterd -p /var/run/glusterd.pid) >>>>>>> >> [2014-11-24 09:22:22.151565] I [glusterd.c:1214:init] >>>>>>> >> 0-management: >>>>>>> >> Maximum allowed open file descriptors set to 65536 >>>>>>> >> [2014-11-24 09:22:22.151599] I [glusterd.c:1259:init] >>>>>>> >> 0-management: >>>>>>> >> Using >>>>>>> >> /var/lib/glusterd as working directory >>>>>>> >> [2014-11-24 09:22:22.155216] W >>>>>>> >> [rdma.c:4195:__gf_rdma_ctx_create] >>>>>>> >> 0-rpc-transport/rdma: rdma_cm event channel creation failed (No >>>>>>> >> such device) >>>>>>> >> [2014-11-24 09:22:22.155264] E [rdma.c:4483:init] >>>>>>> >> 0-rdma.management: >>>>>>> >> Failed to initialize IB Device >>>>>>> >> [2014-11-24 09:22:22.155285] E >>>>>>> >> [rpc-transport.c:333:rpc_transport_load] >>>>>>> >> 0-rpc-transport: 'rdma' initialization failed >>>>>>> >> [2014-11-24 09:22:22.155354] W >>>>>>> >> [rpcsvc.c:1524:rpcsvc_transport_create] >>>>>>> >> 0-rpc-service: cannot create listener, initing the
>>>>>>> >> failed >>>>>>> >> [2014-11-24 09:22:22.156290] I >>>>>>> >> [glusterd.c:413:glusterd_check_gsync_present] 0-glusterd: >>>>>>> >> geo-replication >>>>>>> >> module not installed in the system >>>>>>> >> [2014-11-24 09:22:22.161318] I >>>>>>> >> [glusterd-store.c:2043:glusterd_restore_op_version] 0-glusterd: >>>>>>> >> retrieved >>>>>>> >> op-version: 30600 >>>>>>> >> [2014-11-24 09:22:22.821800] I >>>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>>> >> 0-management: >>>>>>> >> connect returned 0 >>>>>>> >> [2014-11-24 09:22:22.825810] I >>>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>>> >> 0-management: >>>>>>> >> connect returned 0 >>>>>>> >> [2014-11-24 09:22:22.828705] I >>>>>>> >> [glusterd-handler.c:3146:glusterd_friend_add_from_peerinfo] >>>>>>> >> 0-management: >>>>>>> >> connect returned 0 >>>>>>> >> [2014-11-24 09:22:22.828771] I >>>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>>> >> 0-management: setting frame-timeout to 600 >>>>>>> >> [2014-11-24 09:22:22.832670] I >>>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>>> >> 0-management: setting frame-timeout to 600 >>>>>>> >> [2014-11-24 09:22:22.835919] I >>>>>>> >> [rpc-clnt.c:969:rpc_clnt_connection_init] >>>>>>> >> 0-management: setting frame-timeout to 600 >>>>>>> >> [2014-11-24 09:22:22.840209] E >>>>>>> >> [glusterd-store.c:4248:glusterd_resolve_all_bricks] 0-glusterd: >>>>>>> >> resolve >>>>>>> >> brick failed in restore >>>>>>> >> [2014-11-24 09:22:22.840233] E [xlator.c:425:xlator_init] >>>>>>> >> 0-management: >>>>>>> >> Initialization of volume 'management' failed, review your >>>>>>> >> volfile >>>>>>> >> again >>>>>>> >> [2014-11-24 09:22:22.840245] E >>>>>>> >> [graph.c:322:glusterfs_graph_init] >>>>>>> >> 0-management: initializing translator failed >>>>>>> >> [2014-11-24 09:22:22.840264] E >>>>>>> >> [graph.c:525:glusterfs_graph_activate] >>>>>>> >> 0-graph: init failed >>>>>>> >> [2014-11-24 09:22:22.840754] W >>>>>>> >> [glusterfsd.c:1194:cleanup_and_exit] >>>>>>> >> (--> >>>>>>> >> 0-: received signum (0), shutting down >>>>>>> >> >>>>>>> >> Thanks, >>>>>>> >> Punit >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> >>>>>>> >> On Wed, Nov 26, 2014 at 7:14 PM, Kaushal M >>>>>>> >> <kshlmster@gmail.com> >>>>>>> >> wrote: >>>>>>> >>> >>>>>>> >>> Based on the logs I can guess that glusterd is being started >>>>>>> >>> before >>>>>>> >>> the network has come up and that the addresses given to bricks >>>>>>> >>> do >>>>>>> >>> not >>>>>>> >>> directly match the addresses used in during peer probe. >>>>>>> >>> >>>>>>> >>> The gluster_after_reboot log has the line "[2014-11-25 >>>>>>> >>> 06:46:09.972113] E >>>>>>> >>> [glusterd-store.c:2632:glusterd_resolve_all_bricks] >>>>>>> >>> 0-glusterd: resolve brick failed in restore". >>>>>>> >>> >>>>>>> >>> Brick resolution fails when glusterd cannot match the address >>>>>>> >>> for >>>>>>> >>> the >>>>>>> >>> brick, with one of the peers. Brick resolution happens in two >>>>>>> >>> phases, >>>>>>> >>> 1. We first try to identify the peer by performing string >>>>>>> >>> comparisions >>>>>>> >>> with the brick address and the peer addresses (The peer names >>>>>>> >>> will >>>>>>> >>> be >>>>>>> >>> the names/addresses that were given when the peer was
>>>>>>> >>> 2. If we don't find a match from step 1, we will then resolve >>>>>>> >>> all >>>>>>> >>> the >>>>>>> >>> brick address and the peer addresses into addrinfo structs, >>>>>>> >>> and >>>>>>> >>> then >>>>>>> >>> compare these structs to find a match. This process should >>>>>>> >>> generally >>>>>>> >>> find a match if available. This will fail only if the network >>>>>>> >>> is >>>>>>> >>> not >>>>>>> >>> up yet as we cannot resolve addresses. >>>>>>> >>> >>>>>>> >>> The above steps are applicable only to glusterfs versions >>>>>>> >>> >=3.6. >>>>>>> >>> They >>>>>>> >>> were introduced to reduce problems with peer identification, >>>>>>> >>> like >>>>>>> >>> the >>>>>>> >>> one you encountered >>>>>>> >>> >>>>>>> >>> Since both of the steps failed to find a match in one run, but >>>>>>> >>> succeeded later, we can come to the conclusion that, >>>>>>> >>> a) the bricks don't have the exact same string used in
>>>>>>> >>> probe >>>>>>> >>> for >>>>>>> >>> their addresses as step 1 failed, and >>>>>>> >>> b) the network was not up in the initial run, as step 2 failed >>>>>>> >>> during >>>>>>> >>> the initial run, but passed in the second run. >>>>>>> >>> >>>>>>> >>> Please let me know if my conclusion is correct. >>>>>>> >>> >>>>>>> >>> If it is, you can solve your problem in two ways. >>>>>>> >>> 1. Use the same string for doing the peer probe and for
>>>>>>> >>> brick >>>>>>> >>> address during volume create/add-brick. Ideally, we suggest >>>>>>> >>> you >>>>>>> >>> use >>>>>>> >>> properly resolvable FQDNs everywhere. If that is not
On Wed, Nov 26, 2014 at 4:01 PM, Punit Dambiwal <hypunit@gmail.com> wrote:

Is there any one can help on this ??

Thanks,
punit

On Wed, Nov 26, 2014 at 9:42 AM, Punit Dambiwal <hypunit@gmail.com> wrote:

Hi,

My Glusterfs version is :- glusterfs-3.6.1-1.el7

On Wed, Nov 26, 2014 at 1:59 AM, Kanagaraj Mayilsamy <kmayilsa@redhat.com> wrote:

[+Gluster-users@gluster.org]

"Initialization of volume 'management' failed, review your volfile again", glusterd throws this error when the service is started automatically after the reboot. But the service is successfully started later manually by the user.

can somebody from gluster-users please help on this?

glusterfs version: 3.5.1

Thanks,
Kanagaraj

----- Original Message -----
From: "Punit Dambiwal" <hypunit@gmail.com>
To: "Kanagaraj" <kmayilsa@redhat.com>
Cc: users@ovirt.org
Sent: Tuesday, November 25, 2014 7:24:45 PM
Subject: Re: [ovirt-users] Gluster command [<UNKNOWN>] failed on server...

Hi Kanagraj,

Please check the attached log files....i didn't find any thing special....

On Tue, Nov 25, 2014 at 12:12 PM, Kanagaraj <kmayilsa@redhat.com> wrote:

Do you see any errors in /var/log/glusterfs/etc-glusterfs-glusterd.vol.log or vdsm.log when the service is trying to start automatically after the reboot?

Thanks,
Kanagaraj

On 11/24/2014 08:13 PM, Punit Dambiwal wrote:

Hi Kanagaraj,
Yes...once i will start the gluster service and vdsmd ...the host can connect to cluster...but the question is why it's not started even it has chkconfig enabled...

I have tested it in two host cluster environment...(Centos 6.6 and centos 7.0) on both hypervisior cluster..it's failed to reconnect in to cluster after reboot....

In both the environment glusterd enabled for next boot....but it's failed with the same error....seems it's bug in either gluster or Ovirt ??

Please help me to find the workaround here if can not resolve it...as without this the Host machine can not connect after reboot....that means engine will consider it as down and every time need to manually start the gluster service and vdsmd... ??

Thanks,
Punit

On Mon, Nov 24, 2014 at 10:20 PM, Kanagaraj <kmayilsa@redhat.com> wrote:

From vdsm.log "error: Connection failed. Please check if gluster daemon is operational."

Starting glusterd service should fix this issue. 'service glusterd start'
But i am wondering why the glusterd was not started automatically after the reboot.

Thanks,
Kanagaraj
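To narrow down why an enabled glusterd does not come up at boot, it helps to confirm the unit's enable/active state and look at what it logged during the failed boot. A minimal diagnostic sketch, assuming a systemd host such as CentOS 7 (on CentOS 6 the equivalents would be chkconfig --list glusterd and /var/log/glusterfs/etc-glusterfs-glusterd.vol.log):

import subprocess

def run(cmd):
    """Run a command and return its output, ignoring a non-zero exit status."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout.strip() or result.stderr.strip()

# Is the unit enabled for boot, and is it actually running right now?
print("enabled:", run(["systemctl", "is-enabled", "glusterd"]))
print("active: ", run(["systemctl", "is-active", "glusterd"]))

# What did glusterd log during the current boot? An initialization/volfile error
# here, followed by a clean manual start, points at ordering (network not up yet).
print(run(["journalctl", "-u", "glusterd", "-b", "--no-pager"]))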
On 11/24/2014 07:18 PM, Punit Dambiwal wrote:

Hi Kanagaraj,

Please find the attached VDSM logs :-

----------------
Thread-13::DEBUG::2014-11-24 21:41:17,182::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:17,182::task::993::Storage.TaskManager.Task::(_decref) Task=`1691d409-9b27-4585-8281-5ec26154367a`::ref 0 aborting False
Thread-13::DEBUG::2014-11-24 21:41:32,393::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state init -> state preparing
Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-13::INFO::2014-11-24 21:41:32,393::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-13::DEBUG::2014-11-24 21:41:32,393::task::1191::Storage.TaskManager.Task::(prepare) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::finished: {}
Thread-13::DEBUG::2014-11-24 21:41:32,394::task::595::Storage.TaskManager.Task::(_updateState) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::moving from state preparing -> state finished
Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-11-24 21:41:32,394::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-13::DEBUG::2014-11-24 21:41:32,394::task::993::Storage.TaskManager.Task::(_decref) Task=`994c7bc3-a236-4d03-a732-e068c7ed9ed4`::ref 0 aborting False
Thread-13::DEBUG::2014-11-24 21:41:41,550::BindingXMLRPC::1132::vds::(wrapper) client [10.10.10.2]::call getCapabilities with () {}
Thread-13::DEBUG::2014-11-24 21:41:41,553::utils::738::root::(execCmd) /sbin/ip route show to 0.0.0.0/0 table all (cwd None)
Thread-13::DEBUG::2014-11-24 21:41:41,560::utils::758::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-13::DEBUG::2014-11-24 21:41:41,588::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,592::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-object',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,593::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-plugin',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-account',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-proxy',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,598::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-doc',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('gluster-swift-container',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,599::caps::728::root::(_getKeyPackages) rpm package ('glusterfs-geo-replication',) not found
Thread-13::DEBUG::2014-11-24 21:41:41,600::caps::646::root::(get) VirtioRNG DISABLED: libvirt version 0.10.2-29.el6_5.9 required >= 0.10.2-31
Thread-13::DEBUG::2014-11-24 21:41:41,603::BindingXMLRPC::1139::vds::(wrapper) return getCapabilities with {'status': {'message': 'Done', 'code': 0}, 'info': {'HBAInventory': {'iSCSI': [{'InitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8'}], 'FC': []}, 'packages2': {'kernel': {'release': '431.el6.x86_64', 'buildtime': 1385061309.0, 'version': '2.6.32'}, 'glusterfs-rdma': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'glusterfs-fuse': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'spice-server': {'release': '6.el6_5.2', 'buildtime': 1402324637L,
'version': '0.12.4'}, 'vdsm': {'release': '1.gitdb83943.el6', 'buildtime': 1412784567L, 'version': '4.16.7'}, 'qemu-kvm': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'qemu-img': {'release': '2.415.el6_5.10', 'buildtime': 1402435700L, 'version': '0.12.1.2'}, 'libvirt': {'release': '29.el6_5.9', 'buildtime': 1402404612L, 'version': '0.10.2'}, 'glusterfs': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}, 'mom': {'release': '2.el6', 'buildtime': 1403794344L, 'version': '0.4.1'}, 'glusterfs-server': {'release': '1.el6', 'buildtime': 1403622628L, 'version': '3.5.1'}}, 'numaNodeDistance': {'1': [20, 10], '0': [10, 20]}, 'cpuModel': 'Intel(R) Xeon(R) CPU X5650 @ 2.67GHz', 'liveMerge': 'false', 'hooks': {}, 'cpuSockets': '2', 'vmTypes': ['kvm'], 'selinux': {'mode': '1'}, 'kdumpStatus': 0, 'supportedProtocols': ['2.2', '2.3'], 'networks': {'ovirtmgmt': {'iface': u'bond0.10', 'addr': '43.252.176.16', 'bridged': False, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': ['43.252.176.16/24'], 'interface': u'bond0.10', 'ipv6gateway': '::', 'gateway': '43.25.17.1'}, 'Internal': {'iface': 'Internal', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '9000', 'ipv6gateway': '::', 'ports': ['bond1.100']}, 'storage': {'iface': u'bond1', 'addr': '10.10.10.6', 'bridged': False, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'bootproto4': 'none', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24'], 'interface':
u'bond1', 'ipv6gateway': '::', 'gateway': ''}, 'VMNetwork': {'iface': 'VMNetwork', 'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'bridged': True, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'gateway': '', 'bootproto4': 'none', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'mtu': '1500', 'ipv6gateway': '::', 'ports': ['bond0.36']}}, 'bridges': {'Internal': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '9000', 'DELAY': '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'Internal', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '31', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.001018cddaac', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.001018cddaac', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond1.100']}, 'VMNetwork': {'addr': '', 'cfg': {'DEFROUTE': 'no', 'HOTPLUG': 'no', 'MTU': '1500', 'DELAY': '0', 'NM_CONTROLLED':
'no', 'BOOTPROTO': 'none', 'STP': 'off', 'DEVICE': 'VMNetwork', 'TYPE': 'Bridge', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'stp': 'off', 'ipv4addrs': [], 'ipv6gateway': '::', 'gateway': '', 'opts': {'topology_change_detected': '0', 'multicast_last_member_count': '2', 'hash_elasticity': '4', 'multicast_query_response_interval': '999', 'multicast_snooping': '1', 'multicast_startup_query_interval': '3124', 'hello_timer': '131', 'multicast_querier_interval': '25496', 'max_age': '1999', 'hash_max': '512', 'stp_state': '0', 'root_id': '8000.60eb6920b46c', 'priority': '32768', 'multicast_membership_interval': '25996', 'root_path_cost': '0', 'root_port': '0', 'multicast_querier': '0', 'multicast_startup_query_count': '2', 'hello_time': '199', 'topology_change': '0', 'bridge_id': '8000.60eb6920b46c', 'topology_change_timer': '0', 'ageing_time': '29995', 'gc_timer': '31', 'group_addr': '1:80:c2:0:0:0', 'tcn_timer': '0', 'multicast_query_interval': '12498', 'multicast_last_member_interval': '99', 'multicast_router': '1', 'forward_delay': '0'}, 'ports': ['bond0.36']}}, 'uuid': '44454C4C-4C00-1057-8053-B7C04F504E31', 'lastClientIface': 'bond1', 'nics': {'eth3': {'permhwaddr': '00:10:18:cd:da:ae', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ae', 'MASTER': 'bond1', 'DEVICE': 'eth3', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth2': {'permhwaddr': '00:10:18:cd:da:ac', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '9000', 'HWADDR': '00:10:18:cd:da:ac', 'MASTER': 'bond1', 'DEVICE': 'eth2', 'ONBOOT': 'no'}, 'ipv6addrs': [], 'mtu': '9000', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '00:10:18:cd:da:ac', 'speed': 1000}, 'eth1': {'permhwaddr': '60:eb:69:20:b4:6d', 'addr': '', 'cfg': {'SLAVE': 'yes',
'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6d', 'MASTER': 'bond0', 'DEVICE': 'eth1', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}, 'eth0': {'permhwaddr': '60:eb:69:20:b4:6c', 'addr': '', 'cfg': {'SLAVE': 'yes', 'NM_CONTROLLED': 'no', 'MTU': '1500', 'HWADDR': '60:eb:69:20:b4:6c', 'MASTER': 'bond0', 'DEVICE': 'eth0', 'ONBOOT': 'yes'}, 'ipv6addrs': [], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'speed': 1000}}, 'software_revision': '1', 'clusterLevels': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'cpuFlags': u'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,xtopology,nonstop_tsc,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,aes,lahf_lm,tpr_shadow,vnmi,flexpriority,ept,vpid,model_Nehalem,model_Conroe,model_coreduo,model_core2duo,model_Penryn,model_Westmere,model_n270', 'ISCSIInitiatorName': 'iqn.1994-05.com.redhat:32151ce183c8', 'netConfigDirty': 'False', 'supportedENGINEs': ['3.0', '3.1', '3.2', '3.3', '3.4', '3.5'], 'autoNumaBalancing': 2, 'reservedMem': '321', 'bondings': {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '', 'cfg': {'HOTPLUG': 'no', 'MTU': '1500', 'NM_CONTROLLED': 'no', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE': 'bond0', 'ONBOOT': 'yes'}, 'ipv6addrs': ['fe80::62eb:69ff:fe20:b46c/64'], 'mtu': '1500', 'netmask': '', 'ipv4addrs': [], 'hwaddr': '60:eb:69:20:b4:6c', 'slaves': ['eth0', 'eth1'], 'opts': {'miimon': '100', 'mode': '4'}}, 'bond1': {'addr': '10.10.10.6', 'cfg': {'DEFROUTE': 'no', 'IPADDR': '10.10.10.6', 'HOTPLUG': 'no', 'MTU': '9000', 'NM_CONTROLLED': 'no', 'NETMASK': '255.255.255.0', 'BOOTPROTO': 'none', 'BONDING_OPTS': 'mode=4 miimon=100', 'DEVICE':
'bond1', 'ONBOOT': 'no'}, 'ipv6addrs': ['fe80::210:18ff:fecd:daac/64'], 'mtu': '9000', 'netmask': '255.255.255.0', 'ipv4addrs': ['10.10.10.6/24'], 'hwaddr': '00:10:18:cd:da:ac',
...
[Message clipped]
participants (6)
- Alberto Larraz
- Alberto Larraz
- Kanagaraj
- Kanagaraj Mayilsamy
- Kaushal M
- Punit Dambiwal