A few questions to clarify your setup:
sjcvhost03 - is this your arbiter node and your ovirt management node? And
are you running compute + storage on the same nodes, i.e.
sjcstorage01, sjcstorage02, sjcvhost03 (arbiter)?
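For instance, the output of
    gluster volume info vmstore
run from any one of the gluster nodes would confirm this - the volume status
output further down in your mail shows the bricks and their ports, but not
the replica/arbiter layout.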
CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587
- fails with Error creating a storage domain's metadata: ("create meta
file 'outbox' failed: [Errno 5] Input/output error",
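One quick check worth doing from sjcvhost03: mount the volume by hand and
see whether a plain file create hits the same I/O error, for example
    mount -t glusterfs sjcstorage01:/vmstore /mnt
    touch /mnt/test-io
    umount /mnt
If the touch also fails with EIO, the problem is in the gluster mount itself
rather than in anything vdsm is doing.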
Are the vdsm logs you provided from sjcvhost03? There are no errors to
be seen in the gluster log you provided. Could you provide the mount log
from sjcvhost03 (at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log most likely)?
If possible, also send /var/log/glusterfs/* from the 3 storage nodes.
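Something along these lines should gather them (the exact mount-log filename
is a guess based on the default naming, so take whatever matches under
/var/log/glusterfs/):
    # on sjcvhost03: the fuse mount log for the vmstore mount
    cp /var/log/glusterfs/*sjcstorage01:_vmstore.log /tmp/
    # on each of sjcstorage01, sjcstorage02 and sjcvhost03
    tar czf /tmp/glusterfs-logs-$(hostname).tar.gz /var/log/glusterfs/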
thanks
sahina
On 09/23/2015 05:02 AM, Brett Stevens wrote:
> Hi Sahina,
>
> as requested, here are some logs taken during a domain create.
>
> 2015-09-22 18:46:44,320 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-88) [] START,
> GlusterVolumesListVDSCommand(HostName = sjcstorage01,
> GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 2205ff1
>
> 2015-09-22 18:46:44,413 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-88) [] Could not associate brick
> 'sjcstorage01:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:44,417 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-88) [] Could not associate brick
> 'sjcstorage02:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:44,417 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-88) [] Could not add brick
> 'sjcvhost02:/export/vmstore/brick01' to volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:44,418 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-88) [] FINISH,
> GlusterVolumesListVDSCommand, return:
> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
> log id: 2205ff1
>
> 2015-09-22 18:46:45,215 INFO
> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
> (default task-24) [5099cda3] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:45,230 INFO
> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
> (default task-24) [5099cda3] Running command:
> AddStorageServerConnectionCommand internal: false. Entities affected
> : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
> CREATE_STORAGE_DOMAIN with role type ADMIN
>
> 2015-09-22 18:46:45,233 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-24) [5099cda3] START,
> ConnectStorageServerVDSCommand(HostName = sjcvhost03,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storagePoolId='00000000-0000-0000-0000-000000000000',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='null',
> connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 6a112292
>
> 2015-09-22 18:46:48,065 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-24) [5099cda3] FINISH, ConnectStorageServerVDSCommand,
> return: {00000000-0000-0000-0000-000000000000=0}, log id: 6a112292
>
> 2015-09-22 18:46:48,073 INFO
> [org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
> (default task-24) [5099cda3] Lock freed to object
> 'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:48,188 INFO
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Running command:
> AddGlusterFsStorageDomainCommand internal: false. Entities affected :
> ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
> CREATE_STORAGE_DOMAIN with role type ADMIN
>
> 2015-09-22 18:46:48,206 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-23) [6410419] START,
> ConnectStorageServerVDSCommand(HostName = sjcvhost03,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storagePoolId='00000000-0000-0000-0000-000000000000',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
> connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 38a2b0d
>
> 2015-09-22 18:46:48,219 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (default task-23) [6410419] FINISH, ConnectStorageServerVDSCommand,
> return: {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 38a2b0d
>
> 2015-09-22 18:46:48,221 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] START,
> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storageDomain='StorageDomainStatic:{name='sjcvmstore',
> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
> args='sjcstorage01:/vmstore'}), log id: b9fe587
>
> 2015-09-22 18:46:48,744 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-23) [6410419] Correlation ID: null, Call Stack: null,
> Custom Event ID: -1, Message: VDSM sjcvhost03 command failed: Error
> creating a storage domain's metadata: ("create meta file 'outbox'
> failed: [Errno 5] Input/output error",)
>
> 2015-09-22 18:46:48,744 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] Command
> 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
> return value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc
> [code=362, message=Error creating a storage domain's metadata:
> ("create meta file 'outbox' failed: [Errno 5] Input/output error",)]]'
>
> 2015-09-22 18:46:48,744 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] HostName = sjcvhost03
>
> 2015-09-22 18:46:48,745 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] Command
> 'CreateStorageDomainVDSCommand(HostName = sjcvhost03,
> CreateStorageDomainVDSCommandParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storageDomain='StorageDomainStatic:{name='sjcvmstore',
> id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
> args='sjcstorage01:/vmstore'})' execution failed: VDSGenericException:
> VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS,
> error = Error creating a storage domain's metadata: ("create meta file
> 'outbox' failed: [Errno 5] Input/output error",)
>
> 2015-09-22 18:46:48,745 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
> (default task-23) [6410419] FINISH, CreateStorageDomainVDSCommand, log
> id: b9fe587
>
> 2015-09-22 18:46:48,745 ERROR
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Command
> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
> failed: EngineException:
> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
> VDSGenericException: VDSErrorException: Failed in vdscommand to
> CreateStorageDomainVDS, error = Error creating a storage domain's
> metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
> error",) (Failed with error StorageDomainMetadataCreationError and
> code 362)
>
> 2015-09-22 18:46:48,755 INFO
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Command
> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID
> of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>
> 2015-09-22 18:46:48,758 INFO
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Command
> [id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating NEW_ENTITY_ID
> of org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
> snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.
>
> 2015-09-22 18:46:48,769 ERROR
> [org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
> (default task-23) [6410419] Transaction rolled-back for command
> 'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.
>
> 2015-09-22 18:46:48,784 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-23) [6410419] Correlation ID: 6410419, Job ID:
> 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack: null, Custom Event
> ID: -1, Message: Failed to add Storage Domain sjcvmstore. (User:
> admin@internal)
>
> 2015-09-22 18:46:48,996 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Lock Acquired to object
> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:49,018 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Running command:
> RemoveStorageServerConnectionCommand internal: false. Entities
> affected : ID: aaa00000-0000-0000-0000-123456789aaa Type:
> SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
>
> 2015-09-22 18:46:49,024 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Removing connection
> 'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database
>
> 2015-09-22 18:46:49,026 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
> (default task-32) [1635a244] START,
> DisconnectStorageServerVDSCommand(HostName = sjcvhost03,
> StorageServerConnectionManagementVDSParameters:{runAsync='true',
> hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
> storagePoolId='00000000-0000-0000-0000-000000000000',
> storageType='GLUSTERFS',
> connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
> connection='sjcstorage01:/vmstore', iqn='null', vfsType='glusterfs',
> mountOptions='null', nfsVersion='null', nfsRetrans='null',
> nfsTimeo='null', iface='null', netIfaceName='null'}]'}), log id: 39d3b568
>
> 2015-09-22 18:46:49,248 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
> (default task-32) [1635a244] FINISH,
> DisconnectStorageServerVDSCommand, return:
> {ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 39d3b568
>
> 2015-09-22 18:46:49,252 INFO
> [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
> (default task-32) [1635a244] Lock freed to object
> 'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>,
> sjcstorage01:/vmstore=<STORAGE_CONNECTION,
> ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
>
> 2015-09-22 18:46:49,431 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-3) [] START,
> GlusterVolumesListVDSCommand(HostName = sjcstorage01,
> GlusterVolumesListVDSParameters:{runAsync='true',
> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 17014ae8
>
> 2015-09-22 18:46:49,511 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-3) [] Could not associate brick
> 'sjcstorage01:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:49,515 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-3) [] Could not associate brick
> 'sjcstorage02:/export/vmstore/brick01' of volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct network as no
> gluster network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:49,516 WARN
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
> (DefaultQuartzScheduler_Worker-3) [] Could not add brick
> 'sjcvhost02:/export/vmstore/brick01' to volume
> '030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>
> 2015-09-22 18:46:49,516 INFO
> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
> (DefaultQuartzScheduler_Worker-3) [] FINISH,
> GlusterVolumesListVDSCommand, return:
> {030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
> log id: 17014ae8
>
>
>
> ovirt engine thinks that sjcstorage01 is sjcstorage01; it's all testbed
> at the moment and all short names, defined in /etc/hosts (the same file
> copied to each server for consistency).
>
>
> volume status for vmstore is
>
>
> Status of volume: vmstore
>
> Gluster process TCP Port RDMA Port Online Pid
>
> ------------------------------------------------------------------------------
>
> Brick sjcstorage01:/export/vmstore/brick01 49157 0 Y 7444
>
> Brick sjcstorage02:/export/vmstore/brick01 49157 0 Y 4063
>
> Brick sjcvhost02:/export/vmstore/brick01 49156 0 Y 3243
>
> NFS Server on localhost 2049 0 Y 3268
>
> Self-heal Daemon on localhost N/A N/A Y 3284
>
> NFS Server on sjcstorage01 2049 0 Y 7463
>
> Self-heal Daemon on sjcstorage01 N/A N/A Y 7472
>
> NFS Server on sjcstorage02 2049 0 Y 4082
>
> Self-heal Daemon on sjcstorage02 N/A N/A Y 4090
>
> Task Status of Volume vmstore
>
> ------------------------------------------------------------------------------
>
> There are no active volume tasks
>
>
>
> vdsm logs from the time the domain was added
>
>
> Thread-789::DEBUG::2015-09-22
> 19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state init ->
> state preparing
>
> Thread-790::INFO::2015-09-22
> 19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-790::INFO::2015-09-22
> 19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from state
> preparing -> state finished
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
> Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting False
>
> Thread-790::DEBUG::2015-09-22
> 19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52510
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52510
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52510)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52510
>
> Thread-791::INFO::2015-09-22
> 19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52510 started
>
> Thread-791::INFO::2015-09-22
> 19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52510 stopped
>
> Thread-792::DEBUG::2015-09-22
> 19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state init ->
> state preparing
>
> Thread-793::INFO::2015-09-22
> 19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-793::INFO::2015-09-22
> 19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from state
> preparing -> state finished
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
> Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting False
>
> Thread-793::DEBUG::2015-09-22
> 19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52511
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52511
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52511)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52511
>
> Thread-794::INFO::2015-09-22
> 19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52511 started
>
> Thread-794::INFO::2015-09-22
> 19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52511 stopped
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StoragePool.connectStorageServer' in bridge with
> {u'connectionParams': [{u'id':
> u'00000000-0000-0000-0000-000000000000', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
> u'domainType': 7}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state init ->
> state preparing
>
> Thread-795::INFO::2015-09-22
> 19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
> u'00000000-0000-0000-0000-000000000000', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None)
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir) Creating
> directory: /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode:
> None
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo
> -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount
> -t glusterfs -o backup-volfile-servers=sjcstorage02:sjcvhost02
> sjcstorage01:/vmstore
> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
> glusterDomPath: glusterSD/*
>
> Thread-796::DEBUG::2015-09-22
> 19:12:35,707::__init__::298::IOProcessClient::(_run) Starting IOProcess...
>
> Thread-797::DEBUG::2015-09-22
> 19:12:35,712::__init__::298::IOProcessClient::(_run) Starting IOProcess...
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
> uuids: ()
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
> {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain,
> 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain,
> ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
>
> Thread-795::INFO::2015-09-22
> 19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 0,
> 'id': u'00000000-0000-0000-0000-000000000000'}]}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished: {'statuslist':
> [{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from state
> preparing -> state finished
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
> Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting False
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
> Return 'StoragePool.connectStorageServer' in bridge with [{'status':
> 0, 'id': u'00000000-0000-0000-0000-000000000000'}]
>
> Thread-795::DEBUG::2015-09-22
> 19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StoragePool.connectStorageServer' in bridge with
> {u'connectionParams': [{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
> u'domainType': 7}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state init ->
> state preparing
>
> Thread-798::INFO::2015-09-22
> 19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and protect:
> connectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None)
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
> glusterDomPath: glusterSD/*
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains) Found SD
> uuids: ()
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
> {41b75ca9-9783-42a7-9a23-10a2ae3cbb96: storage.glusterSD.findDomain,
> 597d5b5b-7c09-4de9-8840-6993bd9b61a6: storage.glusterSD.findDomain,
> ef17fec4-fecf-4d7e-b815-d1db4ef65225: storage.glusterSD.findDomain}
>
> Thread-798::INFO::2015-09-22
> 19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and protect:
> connectStorageServer, Return response: {'statuslist': [{'status': 0,
> 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished: {'statuslist':
> [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from state
> preparing -> state finished
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
> Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting False
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
> Return 'StoragePool.connectStorageServer' in bridge with [{'status':
> 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>
> Thread-798::DEBUG::2015-09-22
> 19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StorageDomain.create' in bridge with {u'name':
> u'sjcvmstore01', u'domainType': 7, u'domainClass': 1, u'typeArgs':
> u'sjcstorage01:/vmstore', u'version': u'3', u'storagedomainID':
> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state init ->
> state preparing
>
> Thread-801::INFO::2015-09-22
> 19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and protect:
> createStorageDomain(storageType=7,
> sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
> domainName=u'sjcvmstore01', typeSpecificArg=u'sjcstorage01:/vmstore',
> domClass=1, domVersion=u'3', options=None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.sdc.refreshStorage)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.iscsi.rescan)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI
> scan, this will take up to 30 seconds
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.hba.rescan)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan
>
> Thread-802::DEBUG::2015-09-22
> 19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,912::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
> <err> = ''; <rc> = 0
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm settle
> --timeout=5 (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,946::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,948::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-801::ERROR::2015-09-22
> 19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
> looking for unfetched domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>
> Thread-801::ERROR::2015-09-22
> 19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
> looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation
> 'lvm reload operation' got the operation mutex
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n
> /usr/sbin/lvm vgs --config ' devices { preferred_names =
> ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0
> disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
> '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1
> wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50
> retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|'
> --ignoreskippedcluster -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = '
> WARNING: lvmetad is running but disabled. Restart lvmetad before
> enabling it!\n Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3"
> not found\n Cannot process volume group
> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5
>
> Thread-801::WARNING::2015-09-22
> 19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs failed: 5 []
> [' WARNING: lvmetad is running but disabled. Restart lvmetad before
> enabling it!', ' Volume group "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3"
> not found', ' Cannot process volume group
> c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']
>
> Thread-801::DEBUG::2015-09-22
> 19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation
> 'lvm reload operation' released the operation mutex
>
> Thread-801::ERROR::2015-09-22
> 19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
> domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found
>
> Traceback (most recent call last):
>
> File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
>
> dom = findMethod(sdUUID)
>
> File "/usr/share/vdsm/storage/sdc.py", line 172, in _findUnfetchedDomain
>
> raise se.StorageDomainDoesNotExist(sdUUID)
>
> StorageDomainDoesNotExist: Storage domain does not exist:
> (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)
>
> Thread-801::INFO::2015-09-22
> 19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
> sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 domainName=sjcvmstore01
> remotePath=sjcstorage01:/vmstore domClass=1
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,015::__init__::298::IOProcessClient::(_run) Starting IOProcess...
>
> Thread-801::ERROR::2015-09-22
> 19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected error
>
> Traceback (most recent call last):
>
> File "/usr/share/vdsm/storage/task.py", line 873, in _run
>
> return fn(*args, **kargs)
>
> File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
>
> res = f(*args, **kwargs)
>
> File "/usr/share/vdsm/storage/hsm.py", line 2697, in createStorageDomain
>
> domVersion)
>
> File "/usr/share/vdsm/storage/nfsSD.py", line 84, in create
>
> remotePath, storageType, version)
>
> File "/usr/share/vdsm/storage/fileSD.py", line 264, in _prepareMetadata
>
> "create meta file '%s' failed: %s" % (metaFile, str(e)))
>
> StorageDomainMetadataCreationError: Error creating a storage domain's
> metadata: ("create meta file 'outbox' failed: [Errno 5] Input/output
> error",)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
> d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
> u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01',
> u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping task
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in state
> preparing (force False)
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting True
>
> Thread-801::INFO::2015-09-22
> 19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task is
> aborted: "Error creating a storage domain's metadata" - code 362
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare: aborted: Error
> creating a storage domain's metadata
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting True
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort: force False
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state
> preparing -> state aborting
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting: recover policy
> none
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from state
> aborting -> state failed
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-801::ERROR::2015-09-22
> 19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
> {'message': 'Error creating a storage domain\'s metadata: ("create
> meta file \'outbox\' failed: [Errno 5] Input/output error",)', 'code':
> 362}}
>
> Thread-801::DEBUG::2015-09-22
> 19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
> Calling 'StoragePool.disconnectStorageServer' in bridge with
> {u'connectionParams': [{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
> u'domainType': 7}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state init ->
> state preparing
>
> Thread-807::INFO::2015-09-22
> 19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and protect:
> disconnectStorageServer(domType=7,
> spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
> u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
> u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'', u'tpgt': u'1',
> u'vfs_type': u'glusterfs', u'password': '********', u'port': u''}],
> options=None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd) /usr/bin/sudo
> -n /usr/bin/umount -f -l
> /rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.sdc.refreshStorage)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.iscsi.rescan)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,222::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing SCSI
> scan, this will take up to 30 seconds
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
> /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::misc::733::Storage.SamplingMethod::(__call__) Trying to
> enter sampling method (storage.hba.rescan)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::misc::736::Storage.SamplingMethod::(__call__) Got in to
> sampling method
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,350::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
> /usr/bin/sudo -n /usr/sbin/multipath (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS:
> <err> = ''; <rc> = 0
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm settle
> --timeout=5 (cwd None)
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,383::utils::679::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' got the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
> Operation 'lvm invalidate operation' released the operation mutex
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,386::misc::743::Storage.SamplingMethod::(__call__) Returning
> last result
>
> Thread-807::INFO::2015-09-22
> 19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and protect:
> disconnectStorageServer, Return response: {'statuslist': [{'status':
> 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished: {'statuslist':
> [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from state
> preparing -> state finished
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
> Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting False
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
> Return 'StoragePool.disconnectStorageServer' in bridge with
> [{'status': 0, 'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]
>
> Thread-807::DEBUG::2015-09-22
> 19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state init ->
> state preparing
>
> Thread-808::INFO::2015-09-22
> 19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-808::INFO::2015-09-22
> 19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from state
> preparing -> state finished
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
> Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting False
>
> Thread-808::DEBUG::2015-09-22
> 19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52512
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52512
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52512)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52512
>
> Thread-809::INFO::2015-09-22
> 19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52512 started
>
> Thread-809::INFO::2015-09-22
> 19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52512 stopped
>
> Thread-810::DEBUG::2015-09-22
> 19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state init ->
> state preparing
>
> Thread-811::INFO::2015-09-22
> 19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-811::INFO::2015-09-22
> 19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from state
> preparing -> state finished
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
> Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting False
>
> Thread-811::DEBUG::2015-09-22
> 19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52513
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52513
>
> Reactor thread::DEBUG::2015-09-22
> 19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52513)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52513
>
> Thread-812::INFO::2015-09-22
> 19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52513 started
>
> Thread-812::INFO::2015-09-22
> 19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52513 stopped
>
> Thread-813::DEBUG::2015-09-22
> 19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state init ->
> state preparing
>
> Thread-814::INFO::2015-09-22
> 19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and protect:
> repoStats(options=None)
>
> Thread-814::INFO::2015-09-22
> 19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and protect:
> repoStats, Return response: {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from state
> preparing -> state finished
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
> Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting False
>
> Thread-814::DEBUG::2015-09-22
> 19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
> Reactor thread::INFO::2015-09-22
> 19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
> Accepting connection from 127.0.0.1:52515
>
> Reactor thread::DEBUG::2015-09-22
> 19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
> Using required_size=11
>
> Reactor thread::INFO::2015-09-22
> 19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
> Detected protocol xml from 127.0.0.1:52515
>
> Reactor thread::DEBUG::2015-09-22
> 19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket) xml
> over http detected from ('127.0.0.1', 52515)
>
> BindingXMLRPC::INFO::2015-09-22
> 19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request) Starting
> request handler for 127.0.0.1:52515
>
> Thread-815::INFO::2015-09-22
> 19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52515 started
>
> Thread-815::INFO::2015-09-22
> 19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
> Request handler for 127.0.0.1:52515 stopped
>
> Thread-816::DEBUG::2015-09-22
> 19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send) Sending
> response
>
>
>
> gluster logs
>
> +------------------------------------------------------------------------------+
>
> 1: volume vmstore-client-0
>
> 2: type protocol/client
>
> 3: option ping-timeout 42
>
> 4: option remote-host sjcstorage01
>
> 5: option remote-subvolume /export/vmstore/brick01
>
> 6: option transport-type socket
>
> 7: option send-gids true
>
> 8: end-volume
>
> 9:
>
> 10: volume vmstore-client-1
>
> 11: type protocol/client
>
> 12: option ping-timeout 42
>
> 13: option remote-host sjcstorage02
>
> 14: option remote-subvolume /export/vmstore/brick01
>
> 15: option transport-type socket
>
> 16: option send-gids true
>
> 17: end-volume
>
> 18:
>
> 19: volume vmstore-client-2
>
> 20: type protocol/client
>
> 21: option ping-timeout 42
>
> 22: option remote-host sjcvhost02
>
> 23: option remote-subvolume /export/vmstore/brick01
>
> 24: option transport-type socket
>
> 25: option send-gids true
>
> 26: end-volume
>
> 27:
>
> 28: volume vmstore-replicate-0
>
> 29: type cluster/replicate
>
> 30: option arbiter-count 1
>
> 31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
>
> 32: end-volume
>
> 33:
>
> 34: volume vmstore-dht
>
> 35: type cluster/distribute
>
> 36: subvolumes vmstore-replicate-0
>
> 37: end-volume
>
> 38:
>
> 39: volume vmstore-write-behind
>
> 40: type performance/write-behind
>
> 41: subvolumes vmstore-dht
>
> 42: end-volume
>
> 43:
>
> 44: volume vmstore-read-ahead
>
> 45: type performance/read-ahead
>
> 46: subvolumes vmstore-write-behind
>
> 47: end-volume
>
> 48:
>
> 49: volume vmstore-readdir-ahead
>
> 50: type performance/readdir-ahead
>
> 52: end-volume
>
> 53:
>
> 54: volume vmstore-io-cache
>
> 55: type performance/io-cache
>
> 56: subvolumes vmstore-readdir-ahead
>
> 57: end-volume
>
> 58:
>
> 59: volume vmstore-quick-read
>
> 60: type performance/quick-read
>
> 61: subvolumes vmstore-io-cache
>
> 62: end-volume
>
> 63:
>
> 64: volume vmstore-open-behind
>
> 65: type performance/open-behind
>
> 66: subvolumes vmstore-quick-read
>
> 67: end-volume
>
> 68:
>
> 69: volume vmstore-md-cache
>
> 70: type performance/md-cache
>
> 71: subvolumes vmstore-open-behind
>
> 72: end-volume
>
> 73:
>
> 74: volume vmstore
>
> 75: type debug/io-stats
>
> 76: option latency-measurement off
>
> 77: option count-fop-hits off
>
> 78: subvolumes vmstore-md-cache
>
> 79: end-volume
>
> 80:
>
> 81: volume meta-autoload
>
> 82: type meta
>
> 83: subvolumes vmstore
>
> 84: end-volume
>
> 85:
>
> +------------------------------------------------------------------------------+
>
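
For reference: the graph above describes a replica 3 volume with a single arbiter brick on the paths shown. A volume with this layout is normally created along the lines of the sketch below (GlusterFS 3.7+ syntax); the exact command used for this setup is not shown in the thread, so treat it as an assumption.

    # sketch only: 2 data bricks + 1 arbiter brick, names taken from the graph above
    gluster volume create vmstore replica 3 arbiter 1 \
        sjcstorage01:/export/vmstore/brick01 \
        sjcstorage02:/export/vmstore/brick01 \
        sjcvhost02:/export/vmstore/brick01
    gluster volume start vmstore
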
> [2015-09-22 05:29:07.586205] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-0: changing port to 49153 (from 0)
>
> [2015-09-22 05:29:07.586325] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-1: changing port to 49153 (from 0)
>
> [2015-09-22 05:29:07.586480] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-2: changing port to 49153 (from 0)
>
> [2015-09-22 05:29:07.595052] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:29:07.595397] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:29:07.595576] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:29:07.595721] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
> Connected to vmstore-client-0, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:29:07.595738] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:29:07.596044] I [MSGID: 108005]
> [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
> 'vmstore-client-0' came back up; going online.
>
> [2015-09-22 05:29:07.596170] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
> Connected to vmstore-client-1, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:29:07.596189] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:29:07.596495] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
> Connected to vmstore-client-2, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:29:07.596506] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:29:07.608758] I [fuse-bridge.c:5053:fuse_graph_setup]
> 0-fuse: switched to graph 0
>
> [2015-09-22 05:29:07.608910] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
> Server lk version = 1
>
> [2015-09-22 05:29:07.608936] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
> Server lk version = 1
>
> [2015-09-22 05:29:07.608950] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
> Server lk version = 1
>
> [2015-09-22 05:29:07.609695] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
>
> [2015-09-22 05:29:07.609868] I [fuse-bridge.c:3979:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
>
> [2015-09-22 05:29:07.616577] I [MSGID: 109063]
> [dht-layout.c:702:dht_layout_normalize] 0-vmstore-dht: Found anomalies
> in / (gfid = 00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0
>
> [2015-09-22 05:29:07.620230] I [MSGID: 109036]
> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
> Setting layout of / with [Subvol_name: vmstore-replicate-0, Err: -1 ,
> Start: 0 , Stop: 4294967295 , Hash: 1 ],
>
> [2015-09-22 05:29:08.122415] W [fuse-bridge.c:1230:fuse_err_cbk]
> 0-glusterfs-fuse: 26: REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data
> available)
>
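
The __DIRECT_IO_TEST__ file above is created by VDSM to check that the mount supports direct I/O. Since the domain metadata creation fails with an I/O error, a quick sanity check is to try plain and O_DIRECT writes on the mounted volume by hand; the sketch below assumes the mount point seen later in this log and uses an example file name.

    cd /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
    touch iotest                                            # plain file create
    dd if=/dev/zero of=iotest bs=512 count=1 oflag=direct   # O_DIRECT write
    rm -f iotest
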
> [2015-09-22 05:29:08.137359] I [MSGID: 109036]
> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
> Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2 with
> [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop:
> 4294967295 , Hash: 1 ],
>
> [2015-09-22 05:29:08.145835] I [MSGID: 109036]
> [dht-common.c:7754:dht_log_new_layout_for_dir_selfheal] 0-vmstore-dht:
> Setting layout of /061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
> [Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 , Stop:
> 4294967295 , Hash: 1 ],
>
> [2015-09-22 05:30:57.897819] I [MSGID: 100030]
> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
> --volfile-server=sjcvhost02 --volfile-server=sjcstorage01
> --volfile-server=sjcstorage02 --volfile-id=/vmstore
> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>
> [2015-09-22 05:30:57.909889] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
>
> [2015-09-22 05:30:57.923087] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-0: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:30:57.925701] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-1: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:30:57.927984] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-2: parent translators are ready, attempting connect
> on transport
>
> Final graph:
>
> +------------------------------------------------------------------------------+
>
> 1: volume vmstore-client-0
>
> 2: type protocol/client
>
> 3: option ping-timeout 42
>
> 4: option remote-host sjcstorage01
>
> 5: option remote-subvolume /export/vmstore/brick01
>
> 6: option transport-type socket
>
> 7: option send-gids true
>
> 8: end-volume
>
> 9:
>
> 10: volume vmstore-client-1
>
> 11: type protocol/client
>
> 12: option ping-timeout 42
>
> 13: option remote-host sjcstorage02
>
> 14: option remote-subvolume /export/vmstore/brick01
>
> 15: option transport-type socket
>
> 16: option send-gids true
>
> 17: end-volume
>
> 18:
>
> 19: volume vmstore-client-2
>
> 20: type protocol/client
>
> 21: option ping-timeout 42
>
> 22: option remote-host sjcvhost02
>
> 23: option remote-subvolume /export/vmstore/brick01
>
> 24: option transport-type socket
>
> 25: option send-gids true
>
> 26: end-volume
>
> 27:
>
> 28: volume vmstore-replicate-0
>
> 29: type cluster/replicate
>
> 30: option arbiter-count 1
>
> 31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
>
> 32: end-volume
>
> 33:
>
> 34: volume vmstore-dht
>
> 35: type cluster/distribute
>
> 36: subvolumes vmstore-replicate-0
>
> 37: end-volume
>
> 38:
>
> 39: volume vmstore-write-behind
>
> 40: type performance/write-behind
>
> 41: subvolumes vmstore-dht
>
> 42: end-volume
>
> 43:
>
> 44: volume vmstore-read-ahead
>
> 45: type performance/read-ahead
>
> 46: subvolumes vmstore-write-behind
>
> 47: end-volume
>
> 48:
>
> 49: volume vmstore-readdir-ahead
>
> 50: type performance/readdir-ahead
>
> 51: subvolumes vmstore-read-ahead
>
> 52: end-volume
>
> 53:
>
> 54: volume vmstore-io-cache
>
> 55: type performance/io-cache
>
> 56: subvolumes vmstore-readdir-ahead
>
> 57: end-volume
>
> 58:
>
> 59: volume vmstore-quick-read
>
> 60: type performance/quick-read
>
> 61: subvolumes vmstore-io-cache
>
> 62: end-volume
>
> 63:
>
> 64: volume vmstore-open-behind
>
> 65: type performance/open-behind
>
> 66: subvolumes vmstore-quick-read
>
> 67: end-volume
>
> 68:
>
> 69: volume vmstore-md-cache
>
> 70: type performance/md-cache
>
> 71: subvolumes vmstore-open-behind
>
> 72: end-volume
>
> 73:
>
> 74: volume vmstore
>
> 75: type debug/io-stats
>
> 76: option latency-measurement off
>
> 77: option count-fop-hits off
>
> 78: subvolumes vmstore-md-cache
>
> 79: end-volume
>
> 80:
>
> 81: volume meta-autoload
>
> 82: type meta
>
> 83: subvolumes vmstore
>
> 84: end-volume
>
> 85:
>
> +------------------------------------------------------------------------------+
>
> [2015-09-22 05:30:57.934021] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-0: changing port to 49153 (from 0)
>
> [2015-09-22 05:30:57.934145] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-1: changing port to 49153 (from 0)
>
> [2015-09-22 05:30:57.934491] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-2: changing port to 49153 (from 0)
>
> [2015-09-22 05:30:57.942198] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:30:57.942545] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:30:57.942659] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:30:57.942797] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
> Connected to vmstore-client-0, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:30:57.942808] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:30:57.943036] I [MSGID: 108005]
> [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
> 'vmstore-client-0' came back up; going online.
>
> [2015-09-22 05:30:57.943078] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
> Connected to vmstore-client-1, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:30:57.943086] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:30:57.943292] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
> Connected to vmstore-client-2, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:30:57.943302] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:30:57.953887] I [fuse-bridge.c:5053:fuse_graph_setup]
> 0-fuse: switched to graph 0
>
> [2015-09-22 05:30:57.954071] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
> Server lk version = 1
>
> [2015-09-22 05:30:57.954105] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
> Server lk version = 1
>
> [2015-09-22 05:30:57.954124] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
> Server lk version = 1
>
> [2015-09-22 05:30:57.955282] I [fuse-bridge.c:3979:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
>
> [2015-09-22 05:30:57.955738] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
>
> [2015-09-22 05:30:57.970232] I [fuse-bridge.c:4900:fuse_thread_proc]
> 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>
> [2015-09-22 05:30:57.970834] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f1872a09785]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f1872a09609] ) 0-:
> received signum (15), shutting down
>
> [2015-09-22 05:30:57.970848] I [fuse-bridge.c:5595:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>
> [2015-09-22 05:30:58.420973] I [fuse-bridge.c:4900:fuse_thread_proc]
> 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>
> [2015-09-22 05:30:58.421355] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f826933e785]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f826933e609] ) 0-:
> received signum (15), shutting down
>
> [2015-09-22 05:30:58.421369] I [fuse-bridge.c:5595:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>
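
The pattern above (a fresh glusterfs mount followed moments later by "received signum (15)" and an unmount) looks like VDSM mounting the volume, probing it, and tearing the mount down again after the operation fails. To take VDSM out of the picture, the same FUSE mount can be reproduced by hand and exercised; this is only a sketch, and /mnt/vmstore-test is an example mount point.

    mkdir -p /mnt/vmstore-test
    mount -t glusterfs -o backup-volfile-servers=sjcstorage01:sjcstorage02 \
        sjcvhost02:/vmstore /mnt/vmstore-test
    touch /mnt/vmstore-test/probe && rm -f /mnt/vmstore-test/probe
    umount /mnt/vmstore-test
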
> [2015-09-22 05:31:09.534410] I [MSGID: 100030]
> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
> --volfile-server=sjcvhost02 --volfile-server=sjcstorage01
> --volfile-server=sjcstorage02 --volfile-id=/vmstore
> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>
> [2015-09-22 05:31:09.545686] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
>
> [2015-09-22 05:31:09.553019] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-0: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:09.555552] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-1: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:09.557989] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-2: parent translators are ready, attempting connect
> on transport
>
> Final graph:
>
> +------------------------------------------------------------------------------+
>
> 1: volume vmstore-client-0
>
> 2: type protocol/client
>
> 3: option ping-timeout 42
>
> 4: option remote-host sjcstorage01
>
> 5: option remote-subvolume /export/vmstore/brick01
>
> 6: option transport-type socket
>
> 7: option send-gids true
>
> 8: end-volume
>
> 9:
>
> 10: volume vmstore-client-1
>
> 11: type protocol/client
>
> 12: option ping-timeout 42
>
> 13: option remote-host sjcstorage02
>
> 14: option remote-subvolume /export/vmstore/brick01
>
> 15: option transport-type socket
>
> 16: option send-gids true
>
> 17: end-volume
>
> 18:
>
> 19: volume vmstore-client-2
>
> 20: type protocol/client
>
> 21: option ping-timeout 42
>
> 22: option remote-host sjcvhost02
>
> 23: option remote-subvolume /export/vmstore/brick01
>
> 24: option transport-type socket
>
> 25: option send-gids true
>
> 26: end-volume
>
> 27:
>
> 28: volume vmstore-replicate-0
>
> 29: type cluster/replicate
>
> 30: option arbiter-count 1
>
> 31: subvolumes vmstore-client-0 vmstore-client-1 vmstore-client-2
>
> 32: end-volume
>
> 33:
>
> 34: volume vmstore-dht
>
> 35: type cluster/distribute
>
> 36: subvolumes vmstore-replicate-0
>
> 37: end-volume
>
> 38:
>
> 39: volume vmstore-write-behind
>
> 40: type performance/write-behind
>
> 41: subvolumes vmstore-dht
>
> 42: end-volume
>
> 43:
>
> 44: volume vmstore-read-ahead
>
> 45: type performance/read-ahead
>
> 46: subvolumes vmstore-write-behind
>
> 47: end-volume
>
> 48:
>
> 49: volume vmstore-readdir-ahead
>
> 50: type performance/readdir-ahead
>
> 51: subvolumes vmstore-read-ahead
>
> 52: end-volume
>
> 53:
>
> 54: volume vmstore-io-cache
>
> 55: type performance/io-cache
>
> 56: subvolumes vmstore-readdir-ahead
>
> 57: end-volume
>
> 58:
>
> 59: volume vmstore-quick-read
>
> 60: type performance/quick-read
>
> 61: subvolumes vmstore-io-cache
>
> 62: end-volume
>
> 63:
>
> 64: volume vmstore-open-behind
>
> 65: type performance/open-behind
>
> 66: subvolumes vmstore-quick-read
>
> 67: end-volume
>
> 68:
>
> 69: volume vmstore-md-cache
>
> 70: type performance/md-cache
>
> 71: subvolumes vmstore-open-behind
>
> 72: end-volume
>
> 73:
>
> 74: volume vmstore
>
> 75: type debug/io-stats
>
> 76: option latency-measurement off
>
> 77: option count-fop-hits off
>
> 78: subvolumes vmstore-md-cache
>
> 79: end-volume
>
> 80:
>
> 81: volume meta-autoload
>
> 82: type meta
>
> 83: subvolumes vmstore
>
> 84: end-volume
>
> 85:
>
> +------------------------------------------------------------------------------+
>
> [2015-09-22 05:31:09.563262] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-0: changing port to 49153 (from 0)
>
> [2015-09-22 05:31:09.563431] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-1: changing port to 49153 (from 0)
>
> [2015-09-22 05:31:09.563877] I [rpc-clnt.c:1851:rpc_clnt_reconfig]
> 0-vmstore-client-2: changing port to 49153 (from 0)
>
> [2015-09-22 05:31:09.572443] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-1: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:31:09.572599] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:31:09.572742] I [MSGID: 114057]
> [client-handshake.c:1437:select_server_supported_programs]
> 0-vmstore-client-2: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
>
> [2015-09-22 05:31:09.573165] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-1:
> Connected to vmstore-client-1, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:31:09.573186] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-1:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:31:09.573395] I [MSGID: 108005]
> [afr-common.c:3998:afr_notify] 0-vmstore-replicate-0: Subvolume
> 'vmstore-client-1' came back up; going online.
>
> [2015-09-22 05:31:09.573427] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-0:
> Connected to vmstore-client-0, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:31:09.573435] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:31:09.573754] I [MSGID: 114046]
> [client-handshake.c:1213:client_setvolume_cbk] 0-vmstore-client-2:
> Connected to vmstore-client-2, attached to remote volume
> '/export/vmstore/brick01'.
>
> [2015-09-22 05:31:09.573783] I [MSGID: 114047]
> [client-handshake.c:1224:client_setvolume_cbk] 0-vmstore-client-2:
> Server and Client lk-version numbers are not same, reopening the fds
>
> [2015-09-22 05:31:09.577192] I [fuse-bridge.c:5053:fuse_graph_setup]
> 0-fuse: switched to graph 0
>
> [2015-09-22 05:31:09.577302] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-1:
> Server lk version = 1
>
> [2015-09-22 05:31:09.577325] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-0:
> Server lk version = 1
>
> [2015-09-22 05:31:09.577339] I [MSGID: 114035]
> [client-handshake.c:193:client_set_lk_version_cbk] 0-vmstore-client-2:
> Server lk version = 1
>
> [2015-09-22 05:31:09.578125] I [fuse-bridge.c:3979:fuse_init]
> 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.22
> kernel 7.22
>
> [2015-09-22 05:31:09.578636] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
>
> [2015-09-22 05:31:10.073698] I [fuse-bridge.c:4900:fuse_thread_proc]
> 0-fuse: unmounting /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore
>
> [2015-09-22 05:31:10.073977] W [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f6b9d0f2785]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x69) [0x7f6b9d0f2609] ) 0-:
> received signum (15), shutting down
>
> [2015-09-22 05:31:10.073993] I [fuse-bridge.c:5595:fini] 0-fuse:
> Unmounting '/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.
>
> [2015-09-22 05:31:20.184700] I [MSGID: 100030]
> [glusterfsd.c:2301:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.7.4 (args: /usr/sbin/glusterfs
> --volfile-server=sjcvhost02 --volfile-server=sjcstorage01
> --volfile-server=sjcstorage02 --volfile-id=/vmstore
> /rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)
>
> [2015-09-22 05:31:20.194928] I [MSGID: 101190]
> [event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
>
> [2015-09-22 05:31:20.200701] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-0: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:20.203110] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-1: parent translators are ready, attempting connect
> on transport
>
> [2015-09-22 05:31:20.205708] I [MSGID: 114020] [client.c:2118:notify]
> 0-vmstore-client-2: parent translators are ready, attempting connect
> on transport
>
> Final graph:
>
>
>
> Hope this helps.
>
>
> thanks again
>
>
> Brett Stevens
>
>
>
> On Tue, Sep 22, 2015 at 10:14 PM, Sahina Bose <sabose@redhat.com> wrote:
>
>
>
> On 09/22/2015 02:17 PM, Brett Stevens wrote:
>> Hi. First time on the lists. I've searched for this with no luck,
>> so sorry if this has been covered before.
>>
>> I'm working with the latest 3.6 beta with the following
>> infrastructure.
>>
>> 1 management host (to be used for a number of tasks, so I chose not
>> to use self-hosted engine; we are a school and need to keep an eye
>> on hardware costs)
>> 2 compute nodes
>> 2 gluster nodes
>>
>> So far I have built one gluster volume using the gluster CLI, giving me
>> 2 data nodes and one arbiter node (the management host).
>>
>> So far, every time I create a volume, it shows up straight away in
>> the oVirt GUI. However, no matter what I try, I cannot create or
>> import it as a data domain.
>>
>> The current error in the oVirt GUI is "Error while executing
>> action AddGlusterFsStorageDomain: Error creating a storage
>> domain's metadata"
>
> Please provide vdsm and gluster logs
>
>>
>> The logs continuously cycle through the following errors:
>>
>> Scheduler_Worker-53) [] START,
>> GlusterVolumesListVDSCommand(HostName = sjcstorage02,
>> GlusterVolumesListVDSParameters:{runAsync='true',
>> hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf
>>
>> 2015-09-22 03:57:29,903 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>> (DefaultQuartzScheduler_Worker-53) [] Could not associate brick
>> 'sjcstorage01:/export/vmstore/brick01' of volume
>> '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no
>> gluster network found in cluster
>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>
> What is the hostname provided in the oVirt engine for sjcstorage01?
> Does this host have multiple NICs?
>
> Could you provide output of gluster volume info?
> Please note that these errors are not related to the error in
> creating the storage domain. However, they could prevent you
> from monitoring the state of the gluster volume from oVirt.
>
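>
> For reference, the details asked for above can be collected on any of
> the gluster nodes with something like the following:
>
>     gluster volume info vmstore      # brick paths, replica/arbiter counts, volume options
>     gluster volume status vmstore    # ports and whether each brick and daemon is online
>     gluster peer status              # peer hostnames and UUIDs as gluster knows them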
>> 2015-09-22 03:57:29,905 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>> (DefaultQuartzScheduler_Worker-53) [] Could not associate brick
>> 'sjcstorage02:/export/vmstore/brick01' of volume
>> '878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no
>> gluster network found in cluster
>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 WARN
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
>> (DefaultQuartzScheduler_Worker-53) [] Could not add brick
>> 'sjcvhost02:/export/vmstore/brick01' to volume
>> '878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid
>> '29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
>> 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
>>
>> 2015-09-22 03:57:29,905 INFO
>> [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
>> (DefaultQuartzScheduler_Worker-53) [] FINISH,
>> GlusterVolumesListVDSCommand, return:
>> {878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
>> log id: 24198fbf
>>
>>
>> I'm new to oVirt and Gluster, so any help would be great.
>>
>>
>> thanks
>>
>>
>> Brett Stevens
>>
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
>
--------------090502070204020205010802
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 8bit
<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body text="#000000" bgcolor="#FFFFFF">
+ ovirt-users<br>
<br>
Some clarity on your setup - <br>
<span class="">sjcvhost03 - is this your arbiter node and ovirt
management node? And are you running a compute + storage on the
same nodes - i.e, </span><span class="">sjcstorage01, </span><span
class="">sjcstorage02, </span><span class="">sjcvhost03
(arbiter).<br>
<br>
</span><br>
<span class=""> CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587<br>
<br>
- fails with </span><span class="">Error creating a storage
domain's metadata: ("create meta file 'outbox' failed: [Errno 5]
Input/output error",<br>
<br>
Are the vdsm logs you provided from </span><span class="">sjcvhost03?
There are no errors to be seen in the gluster log you provided.
Could you provide mount log from </span><span class=""><span
class="">sjcvhost03</span> (at
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore.log most
likely)<br>
If possible, /var/log/glusterfs/* from the 3 storage nodes.<br>
<br>
thanks<br>
sahina<br>
<br>
</span>
<div class="moz-cite-prefix">On 09/23/2015 05:02 AM, Brett Stevens
wrote:<br>
</div>
<blockquote
cite="mid:CAK02sjsh7JXf56xuMSEW_knZcNem9FNsdjEhd3NAQOQiLjeTrA@mail.gmail.com"
type="cite">
<div dir="ltr">Hi Sahina,Â
<div><br>
</div>
<div>as requested here is some logs taken during a domain
create.</div>
<div><br>
</div>
<div>
<p class=""><span class="">2015-09-22 18:46:44,320 INFOÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id:
2205ff1</span></p>
<p class=""><span class="">2015-09-22 18:46:44,413 WARNÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate
brick 'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:44,417 WARNÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not associate
brick 'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:44,417 WARNÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-88) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in
cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:44,418 INFOÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-88) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a0628f36},
log id: 2205ff1</span></p>
<p class=""><span class="">2015-09-22 18:46:45,215 INFOÂ
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock Acquired to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:45,230 INFOÂ
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Running command:
AddStorageServerConnectionCommand internal: false.
Entities affected :Â ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction
group CREATE_STORAGE_DOMAIN with role type ADMIN</span></p>
<p class=""><span class="">2015-09-22 18:46:45,233 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='null',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 6a112292</span></p>
<p class=""><span class="">2015-09-22 18:46:48,065 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-24) [5099cda3] FINISH,
ConnectStorageServerVDSCommand, return:
{00000000-0000-0000-0000-000000000000=0}, log id: 6a112292</span></p>
<p class=""><span class="">2015-09-22 18:46:48,073 INFOÂ
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand]
(default task-24) [5099cda3] Lock freed to object
'EngineLock:{exclusiveLocks='[sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:48,188 INFOÂ
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Running command:
AddGlusterFsStorageDomainCommand internal: false. Entities
affected :Â ID: aaa00000-0000-0000-0000-123456789aaa Type:
SystemAction group CREATE_STORAGE_DOMAIN with role type
ADMIN</span></p>
<p class=""><span class="">2015-09-22 18:46:48,206 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] START,
ConnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 38a2b0d</span></p>
<p class=""><span class="">2015-09-22 18:46:48,219 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(default task-23) [6410419] FINISH,
ConnectStorageServerVDSCommand, return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 38a2b0d</span></p>
<p class=""><span class="">2015-09-22 18:46:48,221 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] START,
CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'}), log id: b9fe587</span></p>
<p class=""><span class="">2015-09-22 18:46:48,744 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: VDSM sjcvhost03
command failed: Error creating a storage domain's
metadata: ("create meta file 'outbox' failed: [Errno 5]
Input/output error",)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,744 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand'
return value 'StatusOnlyReturnForXmlRpc
[status=StatusForXmlRpc [code=362, message=Error creating
a storage domain's metadata: ("create meta file 'outbox'
failed: [Errno 5] Input/output error",)]]'</span></p>
<p class=""><span class="">2015-09-22 18:46:48,744 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] HostName = sjcvhost03</span></p>
<p class=""><span class="">2015-09-22 18:46:48,745 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] Command
'CreateStorageDomainVDSCommand(HostName = sjcvhost03,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storageDomain='StorageDomainStatic:{name='sjcvmstore',
id='597d5b5b-7c09-4de9-8840-6993bd9b61a6'}',
args='sjcstorage01:/vmstore'})' execution failed:
VDSGenericException: VDSErrorException: Failed in
vdscommand to CreateStorageDomainVDS, error = Error
creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output error",)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,745 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand]
(default task-23) [6410419] FINISH,
CreateStorageDomainVDSCommand, log id: b9fe587</span></p>
<p class=""><span class="">2015-09-22 18:46:48,745 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'
failed: EngineException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed in
vdscommand to CreateStorageDomainVDS, error = Error
creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output error",) (Failed
with error StorageDomainMetadataCreationError and code
362)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,755 INFOÂ
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p class=""><span class="">2015-09-22 18:46:48,758 INFOÂ
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Command
[id=5ae15f53-69a1-47c5-b3a5-82f32c20e48f]: Compensating
NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainStatic;
snapshot: 597d5b5b-7c09-4de9-8840-6993bd9b61a6.</span></p>
<p class=""><span class="">2015-09-22 18:46:48,769 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand]
(default task-23) [6410419] Transaction rolled-back for
command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.</span></p>
<p class=""><span class="">2015-09-22 18:46:48,784 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-23) [6410419] Correlation ID: 6410419, Job
ID: 78692780-a06f-49a5-b6b1-e6c24a820d62, Call Stack:
null, Custom Event ID: -1, Message: Failed to add Storage
Domain sjcvmstore. (User: admin@internal)</span></p>
<p class=""><span class="">2015-09-22 18:46:48,996 INFOÂ
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock Acquired to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,018 INFOÂ
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Running command:
RemoveStorageServerConnectionCommand internal: false.
Entities affected :Â ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction
group CREATE_STORAGE_DOMAIN with role type ADMIN</span></p>
<p class=""><span class="">2015-09-22 18:46:49,024 INFOÂ
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Removing connection
'ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e' from database </span></p>
<p class=""><span class="">2015-09-22 18:46:49,026 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] START,
DisconnectStorageServerVDSCommand(HostName = sjcvhost03,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='80245ac2-32a3-4d5d-b0fe-08019e2d1c9c',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e',
connection='sjcstorage01:/vmstore', iqn='null',
vfsType='glusterfs', mountOptions='null',
nfsVersion='null', nfsRetrans='null', nfsTimeo='null',
iface='null', netIfaceName='null'}]'}), log id: 39d3b568</span></p>
<p class=""><span class="">2015-09-22 18:46:49,248 INFOÂ
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand]
(default task-32) [1635a244] FINISH,
DisconnectStorageServerVDSCommand, return:
{ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=0}, log id: 39d3b568</span></p>
<p class=""><span class="">2015-09-22 18:46:49,252 INFOÂ
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand]
(default task-32) [1635a244] Lock freed to object
'EngineLock:{exclusiveLocks='[ec5ab31e-b5b9-4a8e-a2b2-0876df71a21e=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>,
sjcstorage01:/vmstore=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]',
sharedLocks='null'}'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,431 INFOÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] START,
GlusterVolumesListVDSCommand(HostName = sjcstorage01,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id:
17014ae8</span></p>
<p class=""><span class="">2015-09-22 18:46:49,511 WARNÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate
brick 'sjcstorage01:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,515 WARNÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not associate
brick 'sjcstorage02:/export/vmstore/brick01' of volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' with correct
network as no gluster network found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,516 WARNÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-3) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'030f270a-0999-4df4-9b14-ae56eb0a2fb9' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in
cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p class=""><span class="">2015-09-22 18:46:49,516 INFOÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-3) [] FINISH,
GlusterVolumesListVDSCommand, return:
{030f270a-0999-4df4-9b14-ae56eb0a2fb9=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@92ed0f75},
log id: 17014ae8</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">ovirt engine thinks that
sjcstorage01 is sjcstorage01, its all testbed at the
moment and is all short names, defined in /etc/hosts (all
copied to each server for consistancy)</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">volume info for vmstore is</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">Status of volume: vmstore</span></p>
<p class=""><span class="">Gluster process          Â
    TCP Port RDMA Port Online Pid</span></p>
<p class=""><span class="">------------------------------------------------------------------------------</span></p>
<p class=""><span class="">Brick
sjcstorage01:/export/vmstore/brick01Â 49157 Â Â 0Â Â Â Â Â
Y Â Â Â 7444Â </span></p>
<p class=""><span class="">Brick
sjcstorage02:/export/vmstore/brick01Â 49157 Â Â 0Â Â Â Â Â
Y Â Â Â 4063Â </span></p>
<p class=""><span class="">Brick
sjcvhost02:/export/vmstore/brick01Â Â 49156 Â Â 0Â Â Â Â Â
Y Â Â Â 3243Â </span></p>
<p class=""><span class="">NFS Server on localhost      Â
    2049   0     Y    3268 </span></p>
<p class=""><span class="">Self-heal Daemon on localhost   Â
    N/A    N/A    Y    3284 </span></p>
<p class=""><span class="">NFS Server on sjcstorage01Â Â Â Â Â
    2049   0     Y    7463 </span></p>
<p class=""><span class="">Self-heal Daemon on sjcstorage01Â Â
    N/A    N/A    Y    7472 </span></p>
<p class=""><span class="">NFS Server on sjcstorage02Â Â Â Â Â
    2049   0     Y    4082 </span></p>
<p class=""><span class="">Self-heal Daemon on sjcstorage02Â Â
    N/A    N/A    Y    4090 </span></p>
<p class=""><span class="">Â </span></p>
<p class=""><span class="">Task Status of Volume vmstore</span></p>
<p class=""><span class="">------------------------------------------------------------------------------</span></p>
<p class="">
</p>
<p class=""><span class="">There are no active volume tasks</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">vdsm logs from time the domain is
added</span></p>
<p class=""><span class=""><br>
</span></p>
<p class="">hread-789::DEBUG::2015-09-22
19:12:05,865::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from
state init -> state preparing</p>
<p class="">Thread-790::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:07,797::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-790::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:07,797::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::1191::Storage.TaskManager.Task::(prepare)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::finished: {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::595::Storage.TaskManager.Task::(_updateState)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::moving from
state preparing -> state finished</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,797::task::993::Storage.TaskManager.Task::(_decref)
Task=`93731f26-a48f-45c9-9959-42c96b09cf85`::ref 0 aborting
False</p>
<p class="">Thread-790::DEBUG::2015-09-22
19:12:07,802::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,816::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:14,822::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:14,823::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52510)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a></p>
<p class="">Thread-791::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,823::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a> started</p>
<p class="">Thread-791::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:14,825::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52510">127.0.0.1:52510</a> stopped</p>
<p class="">Thread-792::DEBUG::2015-09-22
19:12:20,872::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from
state init -> state preparing</p>
<p class="">Thread-793::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:22,832::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-793::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:22,832::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::1191::Storage.TaskManager.Task::(prepare)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::finished: {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,832::task::595::Storage.TaskManager.Task::(_updateState)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::moving from
state preparing -> state finished</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,833::task::993::Storage.TaskManager.Task::(_decref)
Task=`a1f48f6f-a9ba-4dac-b024-ae6289f4a7dd`::ref 0 aborting
False</p>
<p class="">Thread-793::DEBUG::2015-09-22
19:12:22,837::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,841::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:29,848::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:29,849::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52511)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a></p>
<p class="">Thread-794::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,849::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a> started</p>
<p class="">Thread-794::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:29,851::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52511">127.0.0.1:52511</a> stopped</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,520::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'00000000-0000-0000-0000-000000000000', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType': 7}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,520::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from
state init -> state preparing</p>
<p class="">Thread-795::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,521::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'00000000-0000-0000-0000-000000000000',
u'connection': u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,539::fileUtils::143::Storage.fileUtils::(createdir)
Creating directory:
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore mode:
None</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,540::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/systemd-run --scope
--slice=vdsm-glusterfs /usr/bin/mount -t glusterfs -o
backup-volfile-servers=sjcstorage02:sjcvhost02
sjcstorage01:/vmstore
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd
None)</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,706::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*</p>
<p class="">Thread-796::DEBUG::2015-09-22
19:12:35,707::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-797::DEBUG::2015-09-22
19:12:35,712::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: ()</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,721::hsm::2489::Storage.HSM::(connectStorageServer)
knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6:
storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225:
storage.glusterSD.findDomain}</p>
<p class="">Thread-795::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,721::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::1191::Storage.TaskManager.Task::(prepare)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::finished:
{'statuslist': [{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::595::Storage.TaskManager.Task::(_updateState)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::moving from
state preparing -> state finished</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::task::993::Storage.TaskManager.Task::(_decref)
Task=`6e8aec06-556f-4659-9ee8-efc60b637ff6`::ref 0 aborting
False</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with
[{'status': 0, 'id':
u'00000000-0000-0000-0000-000000000000'}]</p>
<p class="">Thread-795::DEBUG::2015-09-22
19:12:35,722::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,775::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.connectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType': 7}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,775::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from
state init -> state preparing</p>
<p class="">Thread-798::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,776::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection': u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,777::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: ()</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,782::hsm::2489::Storage.HSM::(connectStorageServer)
knownSDs: {41b75ca9-9783-42a7-9a23-10a2ae3cbb96:
storage.glusterSD.findDomain,
597d5b5b-7c09-4de9-8840-6993bd9b61a6:
storage.glusterSD.findDomain,
ef17fec4-fecf-4d7e-b815-d1db4ef65225:
storage.glusterSD.findDomain}</p>
<p class="">Thread-798::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,782::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::1191::Storage.TaskManager.Task::(prepare)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::finished:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::595::Storage.TaskManager.Task::(_updateState)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::moving from
state preparing -> state finished</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::task::993::Storage.TaskManager.Task::(_decref)
Task=`b2c91515-bdda-45e5-a031-61a1e2c53c4d`::ref 0 aborting
False</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.connectStorageServer' in bridge with
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]</p>
<p class="">Thread-798::DEBUG::2015-09-22
19:12:35,783::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,787::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StorageDomain.create' in bridge with {u'name':
u'sjcvmstore01', u'domainType': 7, u'domainClass': 1,
u'typeArgs': u'sjcstorage01:/vmstore', u'version': u'3',
u'storagedomainID': u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3'}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state init -> state preparing</p>
<p class="">Thread-801::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,788::logUtils::48::dispatcher::(wrapper) Run and
protect: createStorageDomain(storageType=7,
sdUUID=u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',
domainName=u'sjcvmstore01',
typeSpecificArg=u'sjcstorage01:/vmstore', domClass=1,
domVersion=u'3', options=None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.sdc.refreshStorage)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.iscsi.rescan)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsi::431::Storage.ISCSI::(rescan) Performing
SCSI scan, this will take up to 30 seconds</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,788::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.hba.rescan)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,821::hba::56::Storage.HBA::(rescan) Starting scan</p>
<p class="">Thread-802::DEBUG::2015-09-22
19:12:35,882::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::hba::62::Storage.HBA::(rescan) Scan finished</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,912::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,936::multipath::77::Storage.Misc.excCmd::(rescan)
SUCCESS: <err> = ''; <rc> = 0</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,936::utils::661::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,946::utils::679::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,947::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,948::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::138::Storage.StorageDomainCache::(_findDomain)
looking for unfetched domain
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,949::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain)
looking for domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,949::lvm::371::Storage.OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' got the operation mutex</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,950::lvm::291::Storage.Misc.excCmd::(cmd)
/usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices {
preferred_names = ["^/dev/mapper/"]
ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0
filter = [ '\''r|.*|'\'' ] } Â global { Â locking_type=1
 prioritise_write_locks=1  wait_for_locks=1  use_lvmetad=0 }
 backup {  retain_min = 50  retain_days = 0 } ' --noheadings
--units b --nosuffix --separator '|' --ignoreskippedcluster
-o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 (cwd None)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,985::lvm::291::Storage.Misc.excCmd::(cmd) FAILED:
<err> = ' Â WARNING: lvmetad is running but disabled.
Restart lvmetad before enabling it!\n  Volume group
"c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found\n  Cannot
process volume group
c02fda97-62e3-40d3-9a6e-ac5d100f8ad3\n'; <rc> = 5</p>
<p class="">Thread-801::WARNING::2015-09-22
19:12:35,986::lvm::376::Storage.LVM::(_reloadvgs) lvm vgs
failed: 5 [] [' Â WARNING: lvmetad is running but disabled.
Restart lvmetad before enabling it!', ' Â Volume group
"c02fda97-62e3-40d3-9a6e-ac5d100f8ad3" not found', ' Â Cannot
process volume group c02fda97-62e3-40d3-9a6e-ac5d100f8ad3']</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:35,987::lvm::416::Storage.OperationMutex::(_reloadvgs)
Operation 'lvm reload operation' released the operation
mutex</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:35,997::sdc::144::Storage.StorageDomainCache::(_findDomain)
domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 not found</p>
<p class="">Traceback (most recent call last):</p>
<p class="">Â File "/usr/share/vdsm/storage/sdc.py", line 142,
in _findDomain</p>
<p class="">Â Â dom = findMethod(sdUUID)</p>
<p class="">Â File "/usr/share/vdsm/storage/sdc.py", line 172,
in _findUnfetchedDomain</p>
<p class="">Â Â raise se.StorageDomainDoesNotExist(sdUUID)</p>
<p class="">StorageDomainDoesNotExist: Storage domain does not
exist: (u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3',)</p>
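
(The lvm vgs failure and the StorageDomainDoesNotExist above appear to be expected at this point: the domain c02fda97-62e3-40d3-9a6e-ac5d100f8ad3 is file-based (gluster) and is only being created now, so neither a matching LVM volume group nor an existing domain can be found, and vdsm goes on to the nfsSD/fileSD create path. For reference, a simplified version of that LVM probe can be run by hand - a sketch only, with the UUID taken from the createStorageDomain call above:)

# Sketch only: rerun the kind of LVM lookup vdsm logged above and interpret
# the result. rc 5 / "not found" simply means there is no block-storage VG
# by that name, which is what we want for a brand-new gluster domain.
import subprocess

SD_UUID = "c02fda97-62e3-40d3-9a6e-ac5d100f8ad3"

proc = subprocess.Popen(
    ["sudo", "-n", "lvm", "vgs", "--noheadings", "-o", "vg_name", SD_UUID],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
out, err = proc.communicate()
if proc.returncode == 0:
    print("unexpected: VG already exists: %s" % out.strip())
else:
    print("no VG named %s (rc=%d): %s" % (SD_UUID, proc.returncode, err.strip()))
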
<p class="">Thread-801::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:35,998::nfsSD::69::Storage.StorageDomain::(create)
sdUUID=c02fda97-62e3-40d3-9a6e-ac5d100f8ad3
domainName=sjcvmstore01 remotePath=sjcstorage01:/vmstore
domClass=1</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,015::__init__::298::IOProcessClient::(_run)
Starting IOProcess...</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:36,067::task::866::Storage.TaskManager.Task::(_setError)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Unexpected
error</p>
<p class="">Traceback (most recent call last):</p>
<p class="">Â File "/usr/share/vdsm/storage/task.py", line
873, in _run</p>
<p class="">Â Â return fn(*args, **kargs)</p>
<p class="">Â File "/usr/share/vdsm/logUtils.py", line 49, in
wrapper</p>
<p class="">Â Â res = f(*args, **kwargs)</p>
<p class="">Â File "/usr/share/vdsm/storage/hsm.py", line
2697, in createStorageDomain</p>
<p class="">Â Â domVersion)</p>
<p class="">Â File "/usr/share/vdsm/storage/nfsSD.py", line
84, in create</p>
<p class="">Â Â remotePath, storageType, version)</p>
<p class="">Â File "/usr/share/vdsm/storage/fileSD.py", line
264, in _prepareMetadata</p>
<p class="">Â Â "create meta file '%s' failed: %s" %
(metaFile, str(e)))</p>
<p class="">StorageDomainMetadataCreationError: Error creating
a storage domain's metadata: ("create meta file 'outbox'
failed: [Errno 5] Input/output error",)</p>
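
(This is the actual failure: writing the 'outbox' metadata file for the new domain returns EIO through the gluster mount. One way to check whether the mount itself returns I/O errors, independently of vdsm, is to write and fsync a small throwaway file on it from the same host. A sketch only; the mount path is taken from the umount command further down in this log and may need adjusting:)

# Sketch only: try to reproduce the Errno 5 outside vdsm by writing a small
# file through the FUSE mount. MOUNT is taken from the vdsm umount command
# below; change it if the volume is mounted elsewhere on this host.
import os
import sys

MOUNT = "/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore"
probe = os.path.join(MOUNT, "eio_probe_%d" % os.getpid())

try:
    with open(probe, "w") as f:
        f.write("x" * 512)
        f.flush()
        os.fsync(f.fileno())   # push the write through the gluster client
    os.unlink(probe)
    print("write/fsync succeeded - no I/O error from this host right now")
except (IOError, OSError) as exc:
    print("reproduced the failure: %s" % exc)
    sys.exit(1)
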
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::885::Storage.TaskManager.Task::(_run)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._run:
d2d29352-8677-45cb-a4ab-06aa32cf1acb (7,
u'c02fda97-62e3-40d3-9a6e-ac5d100f8ad3', u'sjcvmstore01',
u'sjcstorage01:/vmstore', 1, u'3') {} failed - stopping task</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::1246::Storage.TaskManager.Task::(stop)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::stopping in
state preparing (force False)</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,067::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 1 aborting
True</p>
<p class="">Thread-801::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:36,067::task::1171::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::aborting: Task
is aborted: "Error creating a storage domain's metadata" -
code 362</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::1176::Storage.TaskManager.Task::(prepare)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Prepare:
aborted: Error creating a storage domain's metadata</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::993::Storage.TaskManager.Task::(_decref)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::ref 0 aborting
True</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::928::Storage.TaskManager.Task::(_doAbort)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::Task._doAbort:
force False</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state preparing -> state aborting</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::550::Storage.TaskManager.Task::(__state_aborting)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::_aborting:
recover policy none</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d2d29352-8677-45cb-a4ab-06aa32cf1acb`::moving from
state aborting -> state failed</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,068::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-801::ERROR::2015-09-22
19:12:36,068::dispatcher::76::Storage.Dispatcher::(wrapper)
{'status': {'message': 'Error creating a storage domain\'s
metadata: ("create meta file \'outbox\' failed: [Errno 5]
Input/output error",)', 'code': 362}}</p>
<p class="">Thread-801::DEBUG::2015-09-22
19:12:36,069::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,180::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest)
Calling 'StoragePool.disconnectStorageServer' in bridge with
{u'connectionParams': [{u'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054', u'connection':
u'sjcstorage01:/vmstore', u'iqn': u'', u'user': u'',
u'tpgt': u'1', u'vfs_type': u'glusterfs', u'password':
'********', u'port': u''}], u'storagepoolID':
u'00000000-0000-0000-0000-000000000000', u'domainType': 7}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,181::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from
state init -> state preparing</p>
<p class="">Thread-807::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:36,182::logUtils::48::dispatcher::(wrapper) Run and
protect: disconnectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000',
conList=[{u'id': u'cd55e6a1-022a-4b32-8a94-cab506a9b054',
u'connection': u'sjcstorage01:/vmstore', u'iqn': u'',
u'user': u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,182::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/umount -f -l
/rhev/data-center/mnt/glusterSD/sjcstorage01:_vmstore (cwd
None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.sdc.refreshStorage)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.iscsi.rescan)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,222::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsi::431::Storage.ISCSI::(rescan) Performing
SCSI scan, this will take up to 30 seconds</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,223::iscsiadm::97::Storage.Misc.excCmd::(_runCmd)
/usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::733::Storage.SamplingMethod::(__call__)
Trying to enter sampling method (storage.hba.rescan)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::misc::736::Storage.SamplingMethod::(__call__)
Got in to sampling method</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,258::hba::56::Storage.HBA::(rescan) Starting scan</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::hba::62::Storage.HBA::(rescan) Scan finished</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,350::multipath::77::Storage.Misc.excCmd::(rescan)
/usr/bin/sudo -n /usr/sbin/multipath (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,374::multipath::77::Storage.Misc.excCmd::(rescan)
SUCCESS: <err> = ''; <rc> = 0</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,374::utils::661::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,383::utils::679::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,384::lvm::498::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::500::Storage.OperationMutex::(_invalidateAllPvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::509::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,385::lvm::511::Storage.OperationMutex::(_invalidateAllVgs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::529::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' got the operation mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::lvm::531::Storage.OperationMutex::(_invalidateAllLvs)
Operation 'lvm invalidate operation' released the operation
mutex</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,386::misc::743::Storage.SamplingMethod::(__call__)
Returning last result</p>
<p class="">Thread-807::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:36,386::logUtils::51::dispatcher::(wrapper) Run and
protect: disconnectStorageServer, Return response:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::1191::Storage.TaskManager.Task::(prepare)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::finished:
{'statuslist': [{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::595::Storage.TaskManager.Task::(_updateState)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::moving from
state preparing -> state finished</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,387::task::993::Storage.TaskManager.Task::(_decref)
Task=`01af6594-9c7b-4ec7-b08f-02627db8f421`::ref 0 aborting
False</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,388::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest)
Return 'StoragePool.disconnectStorageServer' in bridge with
[{'status': 0, 'id':
u'cd55e6a1-022a-4b32-8a94-cab506a9b054'}]</p>
<p class="">Thread-807::DEBUG::2015-09-22
19:12:36,388::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from
state init -> state preparing</p>
<p class="">Thread-808::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:37,868::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-808::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:37,868::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::1191::Storage.TaskManager.Task::(prepare)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::finished: {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::595::Storage.TaskManager.Task::(_updateState)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::moving from
state preparing -> state finished</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,868::task::993::Storage.TaskManager.Task::(_decref)
Task=`adddaa68-dd1d-4d2e-9853-b7894ee4809c`::ref 0 aborting
False</p>
<p class="">Thread-808::DEBUG::2015-09-22
19:12:37,873::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,867::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:44,874::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,875::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:44,875::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52512)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,875::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a></p>
<p class="">Thread-809::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,876::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a> started</p>
<p class="">Thread-809::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:44,877::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52512">127.0.0.1:52512</a> stopped</p>
<p class="">Thread-810::DEBUG::2015-09-22
19:12:50,889::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,902::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from
state init -> state preparing</p>
<p class="">Thread-811::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:52,902::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-811::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:52,902::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,902::task::1191::Storage.TaskManager.Task::(prepare)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::finished: {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::task::595::Storage.TaskManager.Task::(_updateState)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::moving from
state preparing -> state finished</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,903::task::993::Storage.TaskManager.Task::(_decref)
Task=`d9fb30bc-dff3-4df3-a25e-2ad09a940fde`::ref 0 aborting
False</p>
<p class="">Thread-811::DEBUG::2015-09-22
19:12:52,908::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,895::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:59,902::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,902::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:12:59,902::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52513)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,903::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a></p>
<p class="">Thread-812::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,903::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a> started</p>
<p class="">Thread-812::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:12:59,904::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52513">127.0.0.1:52513</a> stopped</p>
<p class="">Thread-813::DEBUG::2015-09-22
19:13:05,898::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,934::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from
state init -> state preparing</p>
<p class="">Thread-814::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:07,935::logUtils::48::dispatcher::(wrapper) Run and
protect: repoStats(options=None)</p>
<p class="">Thread-814::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:07,935::logUtils::51::dispatcher::(wrapper) Run and
protect: repoStats, Return response: {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::1191::Storage.TaskManager.Task::(prepare)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::finished: {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::595::Storage.TaskManager.Task::(_updateState)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::moving from
state preparing -> state finished</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,935::task::993::Storage.TaskManager.Task::(_decref)
Task=`c526c56c-6254-4497-9c3e-ffe09ed54af2`::ref 0 aborting
False</p>
<p class="">Thread-814::DEBUG::2015-09-22
19:13:07,939::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,921::protocoldetector::72::ProtocolDetector.AcceptorImpl::(handle_accept)
Accepting connection from <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:13:14,927::protocoldetector::82::ProtocolDetector.Detector::(__init__)
Using required_size=11</p>
<p class="">Reactor thread::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,928::protocoldetector::118::ProtocolDetector.Detector::(handle_read)
Detected protocol xml from <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a></p>
<p class="">Reactor thread::DEBUG::2015-09-22
19:13:14,928::bindingxmlrpc::1297::XmlDetector::(handle_socket)
xml over http detected from ('127.0.0.1', 52515)</p>
<p class="">BindingXMLRPC::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,928::xmlrpc::73::vds.XMLRPCServer::(handle_request)
Starting request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a></p>
<p class="">Thread-815::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,928::xmlrpc::84::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a> started</p>
<p class="">Thread-815::<a class="moz-txt-link-freetext" href="INFO::2015-09-22">INFO::2015-09-22</a>
19:13:14,930::xmlrpc::92::vds.XMLRPCServer::(_process_requests)
Request handler for <a moz-do-not-send="true"
href="http://127.0.0.1:52515">127.0.0.1:52515</a> stopped</p>
<p class=""><span class=""></span></p>
<p class="">Thread-816::DEBUG::2015-09-22
19:13:20,906::stompreactor::304::yajsonrpc.StompServer::(send)
Sending response</p>
</div>
<div><br>
</div>
<div><br>
</div>
<div>gluster logs</div>
<div><br>
</div>
<div>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">Â 1: volume vmstore-client-0</span></p>
<p class=""><span class="">Â 2: Â Â type protocol/client</span></p>
<p class=""><span class="">Â 3: Â Â option ping-timeout 42</span></p>
<p class=""><span class="">Â 4: Â Â option remote-host
sjcstorage01</span></p>
<p class=""><span class="">Â 5: Â Â option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class="">Â 6: Â Â option transport-type
socket</span></p>
<p class=""><span class="">Â 7: Â Â option send-gids true</span></p>
<p class=""><span class="">Â 8: end-volume</span></p>
<p class=""><span class="">Â 9: Â </span></p>
<p class=""><span class="">Â 10: volume vmstore-client-1</span></p>
<p class=""><span class="">Â 11: Â Â type protocol/client</span></p>
<p class=""><span class="">Â 12: Â Â option ping-timeout 42</span></p>
<p class=""><span class="">Â 13: Â Â option remote-host
sjcstorage02</span></p>
<p class=""><span class="">Â 14: Â Â option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class="">Â 15: Â Â option transport-type
socket</span></p>
<p class=""><span class="">Â 16: Â Â option send-gids true</span></p>
<p class=""><span class="">Â 17: end-volume</span></p>
<p class=""><span class="">Â 18: Â </span></p>
<p class=""><span class="">Â 19: volume vmstore-client-2</span></p>
<p class=""><span class="">Â 20: Â Â type protocol/client</span></p>
<p class=""><span class="">Â 21: Â Â option ping-timeout 42</span></p>
<p class=""><span class="">Â 22: Â Â option remote-host
sjcvhost02</span></p>
<p class=""><span class="">Â 23: Â Â option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class="">Â 24: Â Â option transport-type
socket</span></p>
<p class=""><span class="">Â 25: Â Â option send-gids true</span></p>
<p class=""><span class="">Â 26: end-volume</span></p>
<p class=""><span class="">Â 27: Â </span></p>
<p class=""><span class="">Â 28: volume vmstore-replicate-0</span></p>
<p class=""><span class="">Â 29: Â Â type cluster/replicate</span></p>
<p class=""><span class="">Â 30: Â Â option arbiter-count 1</span></p>
<p class=""><span class="">Â 31: Â Â subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class="">Â 32: end-volume</span></p>
<p class=""><span class="">Â 33: Â </span></p>
<p class=""><span class="">Â 34: volume vmstore-dht</span></p>
<p class=""><span class="">Â 35: Â Â type cluster/distribute</span></p>
<p class=""><span class="">Â 36: Â Â subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class="">Â 37: end-volume</span></p>
<p class=""><span class="">Â 38: Â </span></p>
<p class=""><span class="">Â 39: volume vmstore-write-behind</span></p>
<p class=""><span class="">Â 40: Â Â type
performance/write-behind</span></p>
<p class=""><span class="">Â 41: Â Â subvolumes vmstore-dht</span></p>
<p class=""><span class="">Â 42: end-volume</span></p>
<p class=""><span class="">Â 43: Â </span></p>
<p class=""><span class="">Â 44: volume vmstore-read-ahead</span></p>
<p class=""><span class="">Â 45: Â Â type
performance/read-ahead</span></p>
<p class=""><span class="">Â 46: Â Â subvolumes
vmstore-write-behind</span></p>
<p class=""><span class="">Â 47: end-volume</span></p>
<p class=""><span class="">Â 48: Â </span></p>
<p class=""><span class="">Â 49: volume vmstore-readdir-ahead</span></p>
<p class=""><span class="">Â 50: Â Â type
performance/readdir-ahead</span></p>
<p class=""><span class="">52: end-volume</span></p>
<p class=""><span class="">Â 53: Â </span></p>
<p class=""><span class="">Â 54: volume vmstore-io-cache</span></p>
<p class=""><span class="">Â 55: Â Â type performance/io-cache</span></p>
<p class=""><span class="">Â 56: Â Â subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class="">Â 57: end-volume</span></p>
<p class=""><span class="">Â 58: Â </span></p>
<p class=""><span class="">Â 59: volume vmstore-quick-read</span></p>
<p class=""><span class="">Â 60: Â Â type
performance/quick-read</span></p>
<p class=""><span class="">Â 61: Â Â subvolumes
vmstore-io-cache</span></p>
<p class=""><span class="">Â 62: end-volume</span></p>
<p class=""><span class="">Â 63: Â </span></p>
<p class=""><span class="">Â 64: volume vmstore-open-behind</span></p>
<p class=""><span class="">Â 65: Â Â type
performance/open-behind</span></p>
<p class=""><span class="">Â 66: Â Â subvolumes
vmstore-quick-read</span></p>
<p class=""><span class="">Â 67: end-volume</span></p>
<p class=""><span class="">Â 68: Â </span></p>
<p class=""><span class="">Â 69: volume vmstore-md-cache</span></p>
<p class=""><span class="">Â 70: Â Â type performance/md-cache</span></p>
<p class=""><span class="">Â 71: Â Â subvolumes
vmstore-open-behind</span></p>
<p class=""><span class="">Â 72: end-volume</span></p>
<p class=""><span class="">Â 73: Â </span></p>
<p class=""><span class="">Â 74: volume vmstore</span></p>
<p class=""><span class="">Â 75: Â Â type debug/io-stats</span></p>
<p class=""><span class="">Â 76: Â Â option latency-measurement
off</span></p>
<p class=""><span class="">Â 77: Â Â option count-fop-hits off</span></p>
<p class=""><span class="">Â 78: Â Â subvolumes
vmstore-md-cache</span></p>
<p class=""><span class="">Â 79: end-volume</span></p>
<p class=""><span class="">Â 80: Â </span></p>
<p class=""><span class="">Â 81: volume meta-autoload</span></p>
<p class=""><span class="">Â 82: Â Â type meta</span></p>
<p class=""><span class="">Â 83: Â Â subvolumes vmstore</span></p>
<p class=""><span class="">Â 84: end-volume</span></p>
<p class=""><span class="">Â 85: Â </span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.586205] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.586325] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.586480] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595052] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595397] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595576] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595721] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.595738] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596044] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596170] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596189] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596495] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume :</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596189] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596495] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.596506] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608758] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608910] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608936] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.608950] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.609695] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.609868] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel 7.22</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.616577] I
[MSGID: 109063] [dht-layout.c:702:dht_layout_normalize]
0-vmstore-dht: Found anomalies in / (gfid =
00000000-0000-0000-0000-000000000001). Holes=1 overlaps=0</span></p>
<p class=""><span class="">[2015-09-22 05:29:07.620230] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of / with [Subvol_name:
vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295
, Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22 05:29:08.122415] W
[fuse-bridge.c:1230:fuse_err_cbk] 0-glusterfs-fuse: 26:
REMOVEXATTR() /__DIRECT_IO_TEST__ => -1 (No data
available)</span></p>
<p class=""><span class="">[2015-09-22 05:29:08.137359] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2 with [Subvol_name:
vmstore-replicate-0, Err: -1 , Start: 0 , Stop: 4294967295
, Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22 05:29:08.145835] I
[MSGID: 109036]
[dht-common.c:7754:dht_log_new_layout_for_dir_selfheal]
0-vmstore-dht: Setting layout of
/061b73d5-ae59-462e-b674-ea9c60d436c2/dom_md with
[Subvol_name: vmstore-replicate-0, Err: -1 , Start: 0 ,
Stop: 4294967295 , Hash: 1 ], </span></p>
<p class=""><span class="">[2015-09-22 05:30:57.897819] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
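
(That line shows the exact mount invocation vdsm generates. To test the same mount by hand, outside vdsm, the identical glusterfs command can be pointed at a scratch mountpoint and then exercised. A sketch only, reusing the arguments from the log line above; /mnt/vmstore-test is an arbitrary test directory, not part of the existing setup.)

# Sketch only: repeat the mount command from the log against a scratch
# mountpoint, then stat the mount to confirm the FUSE client is serving it.
import os
import subprocess
import time

mountpoint = "/mnt/vmstore-test"   # arbitrary test directory
if not os.path.isdir(mountpoint):
    os.makedirs(mountpoint)

subprocess.check_call([
    "/usr/sbin/glusterfs",
    "--volfile-server=sjcvhost02",
    "--volfile-server=sjcstorage01",
    "--volfile-server=sjcstorage02",
    "--volfile-id=/vmstore",
    mountpoint,
])

time.sleep(2)   # give the client a moment to finish mounting
st = os.statvfs(mountpoint)
print("mounted, %d GiB free" % (st.f_bavail * st.f_frsize // (1024 ** 3)))
# clean up afterwards with: umount /mnt/vmstore-test
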
<p class=""><span class="">[2015-09-22 05:30:57.909889] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.923087] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.925701] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.927984] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">Final graph:</span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">Â 1: volume vmstore-client-0</span></p>
<p class=""><span class="">Â 2: Â Â type protocol/client</span></p>
<p class=""><span class="">Â 3: Â Â option ping-timeout 42</span></p>
<p class=""><span class="">Â 4: Â Â option remote-host
sjcstorage01</span></p>
<p class=""><span class="">Â 5: Â Â option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class="">Â 6: Â Â option transport-type
socket</span></p>
<p class=""><span class="">Â 7: Â Â option send-gids true</span></p>
<p class=""><span class="">Â 8: end-volume</span></p>
<p class=""><span class="">Â 9: Â </span></p>
<p class=""><span class="">Â 10: volume vmstore-client-1</span></p>
<p class=""><span class="">Â 11: Â Â type protocol/client</span></p>
<p class=""><span class="">Â 12: Â Â option ping-timeout 42</span></p>
<p class=""><span class="">Â 13: Â Â option remote-host
sjcstorage02</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">Â 14: Â Â option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class="">Â 15: Â Â option transport-type
socket</span></p>
<p class=""><span class="">Â 16: Â Â option send-gids true</span></p>
<p class=""><span class="">Â 17: end-volume</span></p>
<p class=""><span class="">Â 18: Â </span></p>
<p class=""><span class="">Â 19: volume vmstore-client-2</span></p>
<p class=""><span class="">Â 20: Â Â type protocol/client</span></p>
<p class=""><span class="">Â 21: Â Â option ping-timeout 42</span></p>
<p class=""><span class="">Â 22: Â Â option remote-host
sjcvhost02</span></p>
<p class=""><span class="">Â 23: Â Â option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class="">Â 24: Â Â option transport-type
socket</span></p>
<p class=""><span class="">Â 25: Â Â option send-gids true</span></p>
<p class=""><span class="">Â 26: end-volume</span></p>
<p class=""><span class="">Â 27: Â </span></p>
<p class=""><span class="">Â 28: volume vmstore-replicate-0</span></p>
<p class=""><span class="">Â 29: Â Â type cluster/replicate</span></p>
<p class=""><span class="">Â 30: Â Â option arbiter-count 1</span></p>
<p class=""><span class="">Â 31: Â Â subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class="">Â 32: end-volume</span></p>
<p class=""><span class="">Â 33: Â </span></p>
<p class=""><span class="">Â 34: volume vmstore-dht</span></p>
<p class=""><span class="">Â 35: Â Â type cluster/distribute</span></p>
<p class=""><span class="">Â 36: Â Â subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class="">Â 37: end-volume</span></p>
<p class=""><span class="">Â 38: Â </span></p>
<p class=""><span class="">Â 39: volume vmstore-write-behind</span></p>
<p class=""><span class="">Â 40: Â Â type
performance/write-behind</span></p>
<p class=""><span class="">Â 41: Â Â subvolumes vmstore-dht</span></p>
<p class=""><span class="">Â 42: end-volume</span></p>
<p class=""><span class="">Â 43: Â </span></p>
<p class=""><span class="">Â 44: volume vmstore-read-ahead</span></p>
<p class=""><span class="">Â 45: Â Â type
performance/read-ahead</span></p>
<p class=""><span class="">Â 46: Â Â subvolumes
vmstore-write-behind</span></p>
<p class=""><span class="">Â 47: end-volume</span></p>
<p class=""><span class="">Â 48: Â </span></p>
<p class=""><span class="">Â 49: volume vmstore-readdir-ahead</span></p>
<p class=""><span class="">Â 50: Â Â type
performance/readdir-ahead</span></p>
<p class=""><span class="">Â 51: Â Â subvolumes
vmstore-read-ahead</span></p>
<p class=""><span class="">Â 52: end-volume</span></p>
<p class=""><span class="">Â 53: Â </span></p>
<p class=""><span class="">Â 54: volume vmstore-io-cache</span></p>
<p class=""><span class="">Â 55: Â Â type performance/io-cache</span></p>
<p class=""><span class="">Â 56: Â Â subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class="">Â 57: end-volume</span></p>
<p class=""><span class="">Â 58: Â </span></p>
<p class=""><span class="">Â 59: volume vmstore-quick-read</span></p>
<p class=""><span class="">Â 60: Â Â type
performance/quick-read</span></p>
<p class=""><span class="">Â 61: Â Â subvolumes
vmstore-io-cache</span></p>
<p class=""><span class="">Â 62: end-volume</span></p>
<p class=""><span class="">Â 63: Â </span></p>
<p class=""><span class="">Â 64: volume vmstore-open-behind</span></p>
<p class=""><span class="">Â 65: Â Â type
performance/open-behind</span></p>
<p class=""><span class="">Â 66: Â Â subvolumes
vmstore-quick-read</span></p>
<p class=""><span class="">Â 67: end-volume</span></p>
<p class=""><span class="">Â 68: Â </span></p>
<p class=""><span class="">Â 69: volume vmstore-md-cache</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">Â 70: Â Â type performance/md-cache</span></p>
<p class=""><span class="">Â 71: Â Â subvolumes
vmstore-open-behind</span></p>
<p class=""><span class="">Â 72: end-volume</span></p>
<p class=""><span class="">Â 73: Â </span></p>
<p class=""><span class="">Â 74: volume vmstore</span></p>
<p class=""><span class="">Â 75: Â Â type debug/io-stats</span></p>
<p class=""><span class="">Â 76: Â Â option latency-measurement
off</span></p>
<p class=""><span class="">Â 77: Â Â option count-fop-hits off</span></p>
<p class=""><span class="">Â 78: Â Â subvolumes
vmstore-md-cache</span></p>
<p class=""><span class="">Â 79: end-volume</span></p>
<p class=""><span class="">Â 80: Â </span></p>
<p class=""><span class="">Â 81: volume meta-autoload</span></p>
<p class=""><span class="">Â 82: Â Â type meta</span></p>
<p class=""><span class="">Â 83: Â Â subvolumes vmstore</span></p>
<p class=""><span class="">Â 84: end-volume</span></p>
<p class=""><span class="">Â 85: Â </span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.934021] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.934145] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.934491] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942198] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942545] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942659] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942797] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.942808] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943036] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-0' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943078] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943086] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943292] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.943302] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.953887] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.954071] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.954105] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.954124] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.955282] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel 7.22</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.955738] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.970232] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.970834] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f187139fdf5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f1872a09785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f1872a09609] ) 0-: received signum (15), shutting down</span></p>
<p class=""><span class="">[2015-09-22 05:30:57.970848] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22 05:30:58.420973] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22 05:30:58.421355] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f8267cd4df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f826933e785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f826933e609] ) 0-: received signum (15), shutting down</span></p>
<p class=""><span class="">[2015-09-22 05:30:58.421369] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.534410] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.545686] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.553019] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class="">
</p>
<p class=""><span class="">[2015-09-22 05:31:09.555552] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.557989] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">Final graph:</span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">Â 1: volume vmstore-client-0</span></p>
<p class=""><span class="">Â 2: Â Â type protocol/client</span></p>
<p class=""><span class="">Â 3: Â Â option ping-timeout 42</span></p>
<p class=""><span class="">Â 4: Â Â option remote-host
sjcstorage01</span></p>
<p class=""><span class="">Â 5: Â Â option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class="">Â 6: Â Â option transport-type
socket</span></p>
<p class=""><span class="">Â 7: Â Â option send-gids true</span></p>
<p class=""><span class="">Â 8: end-volume</span></p>
<p class=""><span class="">Â 9: Â </span></p>
<p class=""><span class="">Â 10: volume vmstore-client-1</span></p>
<p class=""><span class="">Â 11: Â Â type protocol/client</span></p>
<p class=""><span class="">Â 12: Â Â option ping-timeout 42</span></p>
<p class=""><span class="">Â 13: Â Â option remote-host
sjcstorage02</span></p>
<p class=""><span class="">Â 14: Â Â option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class="">Â 15: Â Â option transport-type
socket</span></p>
<p class=""><span class="">Â 16: Â Â option send-gids true</span></p>
<p class=""><span class="">Â 17: end-volume</span></p>
<p class=""><span class="">Â 18: Â </span></p>
<p class=""><span class="">Â 19: volume vmstore-client-2</span></p>
<p class=""><span class="">Â 20: Â Â type protocol/client</span></p>
<p class=""><span class="">Â 21: Â Â option ping-timeout 42</span></p>
<p class=""><span class="">Â 22: Â Â option remote-host
sjcvhost02</span></p>
<p class=""><span class="">Â 23: Â Â option remote-subvolume
/export/vmstore/brick01</span></p>
<p class=""><span class="">Â 24: Â Â option transport-type
socket</span></p>
<p class=""><span class="">Â 25: Â Â option send-gids true</span></p>
<p class=""><span class="">Â 26: end-volume</span></p>
<p class=""><span class="">Â 27: Â </span></p>
<p class=""><span class="">Â 28: volume vmstore-replicate-0</span></p>
<p class=""><span class="">Â 29: Â Â type cluster/replicate</span></p>
<p class=""><span class="">Â 30: Â Â option arbiter-count 1</span></p>
<p class=""><span class="">Â 31: Â Â subvolumes
vmstore-client-0 vmstore-client-1 vmstore-client-2</span></p>
<p class=""><span class="">Â 32: end-volume</span></p>
<p class=""><span class="">Â 33: Â </span></p>
<p class=""><span class="">Â 34: volume vmstore-dht</span></p>
<p class=""><span class="">Â 35: Â Â type cluster/distribute</span></p>
<p class=""><span class="">Â 36: Â Â subvolumes
vmstore-replicate-0</span></p>
<p class=""><span class="">Â 37: end-volume</span></p>
<p class=""><span class="">Â 38: Â </span></p>
<p class=""><span class="">Â 39: volume vmstore-write-behind</span></p>
<p class=""><span class="">Â 40: Â Â type
performance/write-behind</span></p>
<p class=""><span class="">Â 41: Â Â subvolumes vmstore-dht</span></p>
<p class=""><span class="">Â 42: end-volume</span></p>
<p class=""><span class="">Â 43: Â </span></p>
<p class=""><span class="">Â 44: volume vmstore-read-ahead</span></p>
<p class=""><span class="">Â 45: Â Â type
performance/read-ahead</span></p>
<p class=""><span class="">Â 46: Â Â subvolumes
vmstore-write-behind</span></p>
<p class=""><span class="">Â 47: end-volume</span></p>
<p class=""><span class="">Â 48: Â </span></p>
<p class=""><span class="">Â 49: volume vmstore-readdir-ahead</span></p>
<p class=""><span class="">Â 50: Â Â type
performance/readdir-ahead</span></p>
<p class=""><span class="">Â 51: Â Â subvolumes
vmstore-read-ahead</span></p>
<p class="">
</p>
<p class=""><span class="">Â 52: end-volume</span></p>
<p class=""><span class="">Â 53: Â </span></p>
<p class=""><span class="">Â 54: volume vmstore-io-cache</span></p>
<p class=""><span class="">Â 55: Â Â type performance/io-cache</span></p>
<p class=""><span class="">Â 56: Â Â subvolumes
vmstore-readdir-ahead</span></p>
<p class=""><span class="">Â 57: end-volume</span></p>
<p class=""><span class="">Â 58: Â </span></p>
<p class=""><span class="">Â 59: volume vmstore-quick-read</span></p>
<p class=""><span class="">Â 60: Â Â type
performance/quick-read</span></p>
<p class=""><span class="">Â 61: Â Â subvolumes
vmstore-io-cache</span></p>
<p class=""><span class="">Â 62: end-volume</span></p>
<p class=""><span class="">Â 63: Â </span></p>
<p class=""><span class="">Â 64: volume vmstore-open-behind</span></p>
<p class=""><span class="">Â 65: Â Â type
performance/open-behind</span></p>
<p class=""><span class="">Â 66: Â Â subvolumes
vmstore-quick-read</span></p>
<p class=""><span class="">Â 67: end-volume</span></p>
<p class=""><span class="">Â 68: Â </span></p>
<p class=""><span class="">Â 69: volume vmstore-md-cache</span></p>
<p class=""><span class="">Â 70: Â Â type performance/md-cache</span></p>
<p class=""><span class="">Â 71: Â Â subvolumes
vmstore-open-behind</span></p>
<p class=""><span class="">Â 72: end-volume</span></p>
<p class=""><span class="">Â 73: Â </span></p>
<p class=""><span class="">Â 74: volume vmstore</span></p>
<p class=""><span class="">Â 75: Â Â type debug/io-stats</span></p>
<p class=""><span class="">Â 76: Â Â option latency-measurement
off</span></p>
<p class=""><span class="">Â 77: Â Â option count-fop-hits off</span></p>
<p class=""><span class="">Â 78: Â Â subvolumes
vmstore-md-cache</span></p>
<p class=""><span class="">Â 79: end-volume</span></p>
<p class=""><span class="">Â 80: Â </span></p>
<p class=""><span class="">Â 81: volume meta-autoload</span></p>
<p class=""><span class="">Â 82: Â Â type meta</span></p>
<p class=""><span class="">Â 83: Â Â subvolumes vmstore</span></p>
<p class=""><span class="">Â 84: end-volume</span></p>
<p class=""><span class="">Â 85: Â </span></p>
<p class=""><span class="">+------------------------------------------------------------------------------+</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.563262] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-0:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.563431] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-1:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.563877] I
[rpc-clnt.c:1851:rpc_clnt_reconfig] 0-vmstore-client-2:
changing port to 49153 (from 0)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.572443] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-1: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.572599] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-0: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.572742] I
[MSGID: 114057]
[client-handshake.c:1437:select_server_supported_programs]
0-vmstore-client-2: Using Program GlusterFS 3.3, Num
(1298437), Version (330)</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573165] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-1: Connected to vmstore-client-1,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573186] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-1: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573395] I
[MSGID: 108005] [afr-common.c:3998:afr_notify]
0-vmstore-replicate-0: Subvolume 'vmstore-client-1' came
back up; going online.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573427] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-0: Connected to vmstore-client-0,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573435] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-0: Server and Client lk-version numbers
are not same, reopening the fds</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.573754] I
[MSGID: 114046]
[client-handshake.c:1213:client_setvolume_cbk]
0-vmstore-client-2: Connected to vmstore-client-2,
attached to remote volume '/export/vmstore/brick01'.</span></p>
<p class="">
</p>
<p class=""><span class="">[2015-09-22 05:31:09.573783] I
[MSGID: 114047]
[client-handshake.c:1224:client_setvolume_cbk]
0-vmstore-client-2: Server and Client lk-version numbers
are not same, reopen:</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577192] I
[fuse-bridge.c:5053:fuse_graph_setup] 0-fuse: switched to
graph 0</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577302] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-1: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577325] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-0: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.577339] I
[MSGID: 114035]
[client-handshake.c:193:client_set_lk_version_cbk]
0-vmstore-client-2: Server lk version = 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.578125] I
[fuse-bridge.c:3979:fuse_init] 0-glusterfs-fuse: FUSE
inited with protocol versions: glusterfs 7.22 kernel 7.22</span></p>
<p class=""><span class="">[2015-09-22 05:31:09.578636] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 2</span></p>
<p class=""><span class="">[2015-09-22 05:31:10.073698] I
[fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore</span></p>
<p class=""><span class="">[2015-09-22 05:31:10.073977] W
[glusterfsd.c:1219:cleanup_and_exit]
(-->/lib64/libpthread.so.0(+0x7df5) [0x7f6b9ba88df5]
-->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5)
[0x7f6b9d0f2785]
-->/usr/sbin/glusterfs(cleanup_and_exit+0x69)
[0x7f6b9d0f2609] ) 0-: received signum (15), shutting down</span></p>
<p class=""><span class="">[2015-09-22 05:31:10.073993] I
[fuse-bridge.c:5595:fini] 0-fuse: Unmounting
'/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore'.</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.184700] I
[MSGID: 100030] [glusterfsd.c:2301:main]
0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs
version 3.7.4 (args: /usr/sbin/glusterfs
--volfile-server=sjcvhost02 --volfile-server=sjcstorage01
--volfile-server=sjcstorage02 --volfile-id=/vmstore
/rhev/data-center/mnt/glusterSD/sjcvhost02:_vmstore)</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.194928] I
[MSGID: 101190]
[event-epoll.c:632:event_dispatch_epoll_worker] 0-epoll:
Started thread with index 1</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.200701] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-0:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.203110] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-1:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">[2015-09-22 05:31:20.205708] I
[MSGID: 114020] [client.c:2118:notify] 0-vmstore-client-2:
parent translators are ready, attempting connect on
transport</span></p>
<p class=""><span class="">
</span></p>
<p class=""><span class="">Final graph:</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class=""><br>
</span></p>
<p class=""><span class="">Hope this helps. </span></p>
<p class=""><span class=""><br>
</span></p>
<p class="">thanks again</p>
<p class=""><br>
</p>
<p class="">Brett Stevens</p>
<p class=""><br>
</p>
</div>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, Sep 22, 2015 at 10:14 PM,
Sahina Bose <span dir="ltr"><<a moz-do-not-send="true"
href="mailto:sabose@redhat.com" target="_blank">sabose(a)redhat.com</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">
<div text="#000000" bgcolor="#FFFFFF"><span class=""> <br>
<br>
<div>On 09/22/2015 02:17 PM, Brett Stevens wrote:<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi. First time on the lists. I've
searched for this but no luck so sorry if this has
been covered before.
<div><br>
</div>
<div>Im working with the latest 3.6 beta with the
following infrastructure. </div>
<div><br>
</div>
<div>1 management host (to be used for a number of
tasks so chose not to use self hosted, we are a
school and will need to keep an eye on hardware
costs)</div>
<div>2 compute nodes</div>
<div>2 gluster nodes</div>
<div><br>
</div>
<div>so far built one gluster volume using the
gluster cli to give me 2 nodes and one arbiter
node (management host)</div>
<div><br>
</div>
<div>so far, every time I create a volume, it shows
up strait away on the ovirt gui. however no matter
what I try, I cannot create or import it as a data
domain. </div>
<div><br>
</div>
<div>the current error in the ovirt gui is "Error
while executing action AddGlusterFsStorageDomain:
Error creating a storage domain's metadata"</div>
</div>
</blockquote>
<br>
</span> Please provide vdsm and gluster logs<span class=""><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div><br>
</div>
<div>logs, continuously rolling the following errors
around</div>
<div>
<p><span>Scheduler_Worker-53) [] START,
GlusterVolumesListVDSCommand(HostName =
sjcstorage02,
GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}),
log id: 24198fbf</span></p>
<p><span>2015-09-22 03:57:29,903 WARNÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not associate brick
'sjcstorage01:/export/vmstore/brick01' of
volume '878a316d-2394-4aae-bdf8-e10eea38225e'
with correct network as no gluster network
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
</div>
</div>
</blockquote>
<br>
</span> What is the hostname provided in ovirt engine for
<span>sjcstorage01 ? Does this host have multiple nics?<br>
<br>
Could you provide output of gluster volume info?<br>
Please note, that these errors are not related to error
in creating storage domain. However, these errors could
prevent you from monitoring the state of gluster volume
from oVirt<br>
<br>
</span>
<blockquote type="cite"><span class="">
<div dir="ltr">
<div>
<p><span>2015-09-22 03:57:29,905 WARNÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not associate brick
'sjcstorage02:/export/vmstore/brick01' of
volume '878a316d-2394-4aae-bdf8-e10eea38225e'
with correct network as no gluster network
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22 03:57:29,905 WARNÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could
not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'878a316d-2394-4aae-bdf8-e10eea38225e' -
server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not
found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'</span></p>
<p><span>2015-09-22 03:57:29,905 INFOÂ
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-53) [] FINISH,
GlusterVolumesListVDSCommand, return:
{878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
log id: 24198fbf</span></p>
<p><span><br>
</span></p>
<p><span>I'm new to ovirt and gluster, so any help
would be great</span></p>
<p><span><br>
</span></p>
<p><span>thanks</span></p>
<p><span><br>
</span></p>
<p><span>Brett Stevens</span></p>
</div>
</div>
<br>
<fieldset></fieldset>
<br>
</span><span class="">
<pre>_______________________________________________
Users mailing list
<a moz-do-not-send="true" href="mailto:Users@ovirt.org" target="_blank">Users(a)ovirt.org</a>
<a moz-do-not-send="true" href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a>
</pre>
</span></blockquote>
<br>
</div>
</blockquote>
</div>
<br>
</div>
</blockquote>
<br>
</body>
</html>
--------------090502070204020205010802--
1
0
Hi list!
After a "war-week" I finally got a systemd-script to put the host in "maintenance" when a shutdown will started.
Now the problem is, that the automatically migration of the VM does NOT work...
I see in the Web console the host will "Preparing for maintenance" and the VM will start the migration, then the host is in "maintenance" and a couple of seconds later the VM will be killed on the other host...
In the Log of the engine I see
1
0
--_000_cb7184f1a52248629b7ff57903e73544hactar2asmorguk_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Hi,
I have tried several times to import a VMware VM from a vcenter installation but it just seems to hang when I hit the "load" button. I have had a look at the vdsm.log but there is so much going on (presumably due to debug) it is hard to distinguish what is happening. Does anyone have any pointers on how to work out what is going on?
Best regards
Ian Fraser
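As a side note for anyone digging through an over-chatty vdsm.log: the log puts the level between '::' separators (Thread-38::DEBUG::... in samples elsewhere in this list), so a few lines of plain Python are enough to hide the DEBUG noise and leave only warnings, errors and tracebacks. A minimal sketch, assuming the default log location /var/log/vdsm/vdsm.log; adjust the path and the tokens as needed:

#!/usr/bin/env python
# Sketch: print only the interesting vdsm log lines (warnings, errors,
# tracebacks) so a hung import is easier to spot among DEBUG output.
import sys

KEEP = ("::ERROR::", "::WARNING::", "Traceback (most recent call last)")

def scan(path):
    with open(path) as log:
        for line in log:
            if any(token in line for token in KEEP):
                sys.stdout.write(line)

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else "/var/log/vdsm/vdsm.log")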
2
5
Dear users,
I'm currently looking for a VDI solution. One of the requirements is to use OSS as much as possible, and I think oVirt could do the job.
However, we would like to build this VDI infrastructure on top of Ceph storage with RBD.
I know KVM/QEMU supports the rbd protocol, but I cannot find information about how to use it with oVirt. It seems I can only use NFS and GlusterFS.
Is there any way to use Ceph RBD devices attached to our virtual machines?
Thanks in advance
Best regards,
2
3
I have a simple setup. One machine is a node. I installed the 3.6 sixth beta
release node ISO on a machine and its IP address is set up. Then I try to deploy
the hosted engine over SSH. I enter a local HTTP URL of the CentOS 7 ISO and
hit Deploy.
The TUI stops and I get:
login as: admin
admin(a)192.168.100.70's password:
Last login: Tue Sep 22 15:25:43 2015
An error appeared in the UI: AttributeError("'TransactionProgressDialog'
object has no attribute 'event'",)
Press ENTER to logout ...
or enter 's' to drop to shell
The iso file does exist and is working.
1
0
Hi Chris,
Replies inline..
On 09/22/2015 09:31 AM, Sahina Bose wrote:
>
>
>
> -------- Forwarded Message --------
> Subject: Re: [ovirt-users] urgent issue
> Date: Wed, 9 Sep 2015 08:31:07 -0700
> From: Chris Liebman <chris.l(a)taboola.com>
> To: users <users(a)ovirt.org>
>
>
>
> Ok - I think I'm going to switch to local storage - I've had way too
> many unexplainable issues with glusterfs :-(. Is there any reason I
> can't add local storage to the existing shared-storage cluster? I see
> that the menu item is greyed out....
>
>
What version of gluster and ovirt are you using?
>
>
>
> On Tue, Sep 8, 2015 at 4:19 PM, Chris Liebman <chris.l(a)taboola.com
> <mailto:chris.l@taboola.com>> wrote:
>
> It's possible that this is specific to just one gluster volume...
> I've moved a few VM disks off of that volume and am able to start
> them fine. My recollection is that any VM started on the "bad"
> volume causes it to be disconnected and forces the ovirt node to
> be marked down until Maint->Activate.
>
> On Tue, Sep 8, 2015 at 3:52 PM, Chris Liebman
> <chris.l(a)taboola.com> wrote:
>
> In attempting to put an ovirt cluster in production I'm
> running into some odd errors with gluster, it looks like. It's
> 12 hosts, each with one brick in distributed-replicate
> (actually 2 bricks, but they are separate volumes).
>
These 12 nodes in dist-rep config, are they in replica 2 or replica 3?
The latter is what is recommended for VM use-cases. Could you give the
output of `gluster volume info` ?
>
> [root@ovirt-node268 glusterfs]# rpm -qa | grep vdsm
>
> vdsm-jsonrpc-4.16.20-0.el6.noarch
>
> vdsm-gluster-4.16.20-0.el6.noarch
>
> vdsm-xmlrpc-4.16.20-0.el6.noarch
>
> vdsm-yajsonrpc-4.16.20-0.el6.noarch
>
> vdsm-4.16.20-0.el6.x86_64
>
> vdsm-python-zombiereaper-4.16.20-0.el6.noarch
>
> vdsm-python-4.16.20-0.el6.noarch
>
> vdsm-cli-4.16.20-0.el6.noarch
>
>
> Everything was fine last week; however, today various
> clients in the gluster cluster seem to get "client quorum not
> met" periodically - when they get this they take one of the
> bricks offline - this causes VMs to be attempted to move -
> sometimes 20 at a time. That takes a long time :-(. I've
> tried disabling automatic migration and the VMs get paused
> when this happens - resuming gets nothing at that point as the
> volume's mount on the server hosting the VM is not connected:
>
>
> from
> rhev-data-center-mnt-glusterSD-ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02.log:
>
> [2015-09-08 21:18:42.920771] W [MSGID: 108001]
> [afr-common.c:4043:afr_notify] 2-LADC-TBX-V02-replicate-2:
> Client-quorum is not met
>
When client-quorum is not met (due to network disconnects, or gluster
brick processes going down etc), gluster makes the volume read-only.
This is expected behavior and prevents split-brains. It's probably a bit
late, but do you have the gluster fuse mount logs to confirm this
indeed was the issue?
> [2015-09-08 21:18:42.931751] I
> [fuse-bridge.c:4900:fuse_thread_proc] 0-fuse: unmounting
> /rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02
>
> [2015-09-08 21:18:42.931836] W
> [glusterfsd.c:1219:cleanup_and_exit]
> (-->/lib64/libpthread.so.0(+0x7a51) [0x7f1bebc84a51]
> -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xcd) [0x405e4d]
> -->/usr/sbin/glusterfs(cleanup_and_exit+0x
>
> 65) [0x4059b5] ) 0-: received signum (15), shutting down
>
> [2015-09-08 21:18:42.931858] I [fuse-bridge.c:5595:fini]
> 0-fuse: Unmounting
> '/rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02'.
>
The VM pause you saw could be because of the unmount. I understand that a
fix (https://gerrit.ovirt.org/#/c/40240/) went in for oVirt 3.6
(vdsm-4.17) to prevent vdsm from unmounting the gluster volume when vdsm
exits/restarts.
Is it possible to run a test setup on 3.6 and see if this is still
happening?
>
> And the mount is broken at that point:
>
> [root@ovirt-node267 ~]# df
>
> df:
> `/rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V02':
> Transport endpoint is not connected
>
Yes because it received a SIGTERM above.
Thanks,
Ravi
>
> Filesystem                                             1K-blocks       Used  Available Use% Mounted on
> /dev/sda3                                               51475068    1968452   46885176   5% /
> tmpfs                                                  132210244          0  132210244   0% /dev/shm
> /dev/sda2                                                 487652      32409     429643   8% /boot
> /dev/sda1                                                 204580        260     204320   1% /boot/efi
> /dev/sda5                                             1849960960  156714056 1599267616   9% /data1
> /dev/sdb1                                             1902274676   18714468 1786923588   2% /data2
> ovirt-node268.la.taboolasyndication.com:/LADC-TBX-V01 9249804800  727008640 8052899712   9% /rhev/data-center/mnt/glusterSD/ovirt-node268.la.taboolasyndication.com:_LADC-TBX-V01
> ovirt-node251.la.taboolasyndication.com:/LADC-TBX-V03 1849960960      73728 1755907968   1% /rhev/data-center/mnt/glusterSD/ovirt-node251.la.taboolasyndication.com:_LADC-TBX-V03
>
> The fix for that is to put the server in maintenance mode then
> activate it again. But all VMs need to be migrated or stopped
> for that to work.
>
>
> I'm not seeing any obvious network or disk errors......
>
> Are there configuration options I'm missing?
>
>
>
>
>
2
1
Hi. First time on the lists. I've searched for this but no luck so sorry if
this has been covered before.
I'm working with the latest 3.6 beta with the following infrastructure.
1 management host (to be used for a number of tasks so chose not to use
self hosted, we are a school and will need to keep an eye on hardware costs)
2 compute nodes
2 gluster nodes
so far built one gluster volume using the gluster cli to give me 2 nodes
and one arbiter node (management host)
so far, every time I create a volume, it shows up straight away in the ovirt
gui. However, no matter what I try, I cannot create or import it as a data
domain.
the current error in the ovirt gui is "Error while executing action
AddGlusterFsStorageDomain: Error creating a storage domain's metadata"
the logs are continuously rolling the following errors around:
Scheduler_Worker-53) [] START, GlusterVolumesListVDSCommand(HostName =
sjcstorage02, GlusterVolumesListVDSParameters:{runAsync='true',
hostId='c75682ba-1e4c-42a3-85c7-16e4bb2ce5da'}), log id: 24198fbf
2015-09-22 03:57:29,903 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could not associate brick
'sjcstorage01:/export/vmstore/brick01' of volume
'878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster
network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could not associate brick
'sjcstorage02:/export/vmstore/brick01' of volume
'878a316d-2394-4aae-bdf8-e10eea38225e' with correct network as no gluster
network found in cluster 'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 03:57:29,905 WARN
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturnForXmlRpc]
(DefaultQuartzScheduler_Worker-53) [] Could not add brick
'sjcvhost02:/export/vmstore/brick01' to volume
'878a316d-2394-4aae-bdf8-e10eea38225e' - server uuid
'29b58278-9aa3-47c5-bfb4-1948ef7fdbba' not found in cluster
'b00d3c6d-fdfb-49e8-9f1a-f749c3d42486'
2015-09-22 03:57:29,905 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-53) [] FINISH, GlusterVolumesListVDSCommand,
return:
{878a316d-2394-4aae-bdf8-e10eea38225e=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@41e93fb1},
log id: 24198fbf
I'm new to ovirt and gluster, so any help would be great
thanks
Brett Stevens
3
2
21 Sep '15
On Mon, Sep 21, 2015 at 02:51:32PM -0400, Douglas Schilling Landgraf wrote:
> Hi Budur,
>
> On 09/21/2015 03:39 AM, Budur Nagaraju wrote:
> >Hi
> >
> >While converting VMware to ovirt I am getting the below error; can someone help me?
Which version of virt-v2v?
The latest version can be found by reading the instructions here:
https://www.redhat.com/archives/libguestfs/2015-April/msg00038.html
https://www.redhat.com/archives/libguestfs/2015-April/msg00039.html
Please don't use the old (0.9) version.
> >I have given the password in the file "$HOME/.netrc",
> >
> >[root@cstnfs ~]# virt-v2v -ic esx://10.206.68.57?no_verify=1 -o rhev -os
> >10.204.206.10:/cst/secondary --network perfmgt vm
> >virt-v2v: Failed to connect to esx://10.206.68.57?no_verify=1: libvirt
> >error code: 45, message: authentication failed: Password request failed
>
> Have you used the below format in the .netrc?
> machine esx.example.com login root password s3cr3t
>
> Additionally, have you set 0600 as permission to .netrc?
> chmod 600 ~/.netrc
The new version of virt-v2v does not use '.netrc' at all. Instead
there is a '--password-file' option. Best to read the manual page:
http://libguestfs.org/virt-v2v.1.html
Rich.
--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top
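To make the --password-file usage concrete, here is a rough sketch that drives the new virt-v2v from Python with the same source and target as the failing command above. It is only an illustration: the password file path is hypothetical (it should contain just the ESX password on one line and be readable only by root), and the other arguments are copied from the original command.

#!/usr/bin/env python
# Sketch: invoke the current virt-v2v with --password-file instead of
# relying on ~/.netrc, which the new virt-v2v no longer reads.
import subprocess

cmd = [
    "virt-v2v",
    "-ic", "esx://10.206.68.57?no_verify=1",
    "--password-file", "/root/esx-password.txt",  # hypothetical path, chmod 600
    "-o", "rhev",
    "-os", "10.204.206.10:/cst/secondary",
    "--network", "perfmgt",
    "vm",
]
subprocess.check_call(cmd)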
1
0
So when I used oVirt 3.4.x, noVNC worked wonderfully. We're running
3.5.1.1-1.el6 now, and when I try to connect to a VM's console via noVNC I
get this error.
[image: Screen Shot 2015-09-21 at 10.09.23 AM.png]
I've downloaded the ca.crt file and installed it but I still get an HTTPS
error when connecting to the oVirt management console. Looking at the SSL
information Chrome says the following:
"The identity of this website has been verified by
ovirtm01.sharperlending.aws.96747. No Certificate Transparency information
was supplied by the server.
The certificate chain for this website contains at least one certificate
that was signed using a deprecated signature algorithm based on SHA-1."
Is this a known issue?
Thanks,
--
*Michael Kleinpaste*
Senior Systems Administrator
SharperLending, LLC.
www.SharperLending.com
Michael.Kleinpaste(a)SharperLending.com
(509) 324-1230 Fax: (509) 324-1234
2
2
Hi all,
We have a hosted-engine deployment with two hypervisors, iSCSI for VMs,
and NFS4 for the engine VM + ISO + Export domains.
Yesterday we did an update from ovirt 3.5.3 to 3.5.4 along with OS updates
for the hypervisors and the engine VM.
After that we are unable to clone VMs; the task does not finish.
We have this in the vdsm log:
Thread-38::DEBUG::2015-09-21
11:50:42,374::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
Thread-66::DEBUG::2015-09-21
11:50:43,721::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
Thread-67::DEBUG::2015-09-21
11:50:44,189::libvirtconnection::143::root::(wrapper) Unknown libvirterror:
ecode: 80 edom: 20 level: 2 message: metadata not found: Requested metadata
element is not present
vdsm-python-4.16.26-0.el7.centos.noarch
vdsm-4.16.26-0.el7.centos.x86_64
vdsm-xmlrpc-4.16.26-0.el7.centos.noarch
vdsm-yajsonrpc-4.16.26-0.el7.centos.noarch
vdsm-jsonrpc-4.16.26-0.el7.centos.noarch
vdsm-cli-4.16.26-0.el7.centos.noarch
vdsm-python-zombiereaper-4.16.26-0.el7.centos.noarch
libvirt-daemon-driver-qemu-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-nodedev-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-kvm-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-config-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-1.2.8-16.el7_1.4.x86_64
libvirt-python-1.2.8-7.el7_1.1.x86_64
libvirt-daemon-driver-secret-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-network-1.2.8-16.el7_1.4.x86_64
libvirt-lock-sanlock-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-interface-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-nwfilter-1.2.8-16.el7_1.4.x86_64
libvirt-daemon-driver-storage-1.2.8-16.el7_1.4.x86_64
libvirt-client-1.2.8-16.el7_1.4.x86_64
Thanks in advance
2
1
Hi
Can you please provide info on how to import a VMware OVA into the
oVirt format?
Thanks,
Nagaraju
2
2
Hi
While converting VMware to ovirt I am getting the below error; can someone help me?
I have given the password in the file "$HOME/.netrc",
[root@cstnfs ~]# virt-v2v -ic esx://10.206.68.57?no_verify=1 -o rhev -os
10.204.206.10:/cst/secondary --network perfmgt vm
virt-v2v: Failed to connect to esx://10.206.68.57?no_verify=1: libvirt
error code: 45, message: authentication failed: Password request failed
Thanks,
Nagaraju
1
0
The oVirt team is pleased to announce that today oVirt moved to its own
classification within our Bugzilla system as previously anticipated [1].
No longer limited as a set of sub-projects, each building block
(sub-project) of oVirt will be a Bugzilla product.
This will allow tracking of package versions and target releases based on
their own versioning schema.
Each maintainer, for example, will have administrative rights on his or her
Bugzilla sub-project and will be able to change flags,
versions, targets, and components.
As part of the improvements of the Bugzilla tracking system, a flag system
has been added to the oVirt product in order to ease its management [2].
The changes will go into effect in stages; please review the wiki for more
details.
We invite you to review the new tracking system and get involved with oVirt
QA [3] to make oVirt better than ever!
[1] http://community.redhat.com/blog/2015/06/moving-focus-to-the-upstream/
[2] http://www.ovirt.org/Bugzilla_rework
[3] http://www.ovirt.org/OVirt_Quality_Assurance
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
2
1
Hi,
I am currently writing a little backup tool in Python which uses the following
workflow:
- create a snapshot -> works
- clone snapshot into VM -> help needed
- delete the snapshot -> works
- export VM to NFS share -> works
- delete cloned VM -> TODO
Is it possible to clone a snapshot into a VM like from the web interface?
The above workflow is a little bit resource-expensive, but when it is
finished it will make online full backups of VMs.
cheers
gregor
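For the "clone snapshot into VM" step, a minimal sketch with ovirt-engine-sdk-python (the 3.x series) is below, assuming the snapshot was created with a known description. It is an illustration rather than a finished piece of the tool: the engine URL, credentials, VM and cluster names are placeholders, and waiting/error handling is reduced to a bare loop. The idea is that adding a new VM whose definition references the snapshot id performs the same "Clone VM from Snapshot" operation as the web interface.

#!/usr/bin/env python
# Sketch: clone a snapshot into a new VM with ovirt-engine-sdk-python (3.x).
# URL, credentials and all names below are placeholders.
import time

from ovirtsdk.api import API
from ovirtsdk.xml import params

ENGINE_URL = "https://engine.example.com/api"   # placeholder
SOURCE_VM = "myvm"                              # placeholder
SNAP_DESC = "backup-snap"                       # description given when the snapshot was created
CLONE_NAME = "myvm-backup-clone"                # placeholder

api = API(url=ENGINE_URL, username="admin@internal",
          password="secret", insecure=True)
try:
    vm = api.vms.get(name=SOURCE_VM)
    snap = [s for s in vm.snapshots.list() if s.description == SNAP_DESC][0]

    # Adding a VM that references the snapshot id clones that snapshot
    # into a new, independent VM.
    api.vms.add(params.VM(
        name=CLONE_NAME,
        cluster=params.Cluster(name="Default"),       # placeholder cluster
        template=params.Template(name="Blank"),
        snapshots=params.Snapshots(
            snapshot=[params.Snapshot(id=snap.id)]),
    ))

    # Wait until the image copy finishes before exporting or deleting the clone.
    while api.vms.get(name=CLONE_NAME).status.state == "image_locked":
        time.sleep(10)
finally:
    api.disconnect()

Once the clone is down, the existing export and cleanup steps can reuse the same api handle; api.vms.get(name=CLONE_NAME).delete() covers the final "delete cloned VM" step.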
3
5
Hi,
somehow I got lost about the possibility to do a live storage migration.
We are using OVirt 3.5.4 + FC20 Nodes (virt-preview - qemu 2.1.3)
From the WebUI I have the following possibilities:
1) disk without snapshot: VMs tab -> Disks -> Move: Button is active
but it does not allow to do a migration. No selectable storage domain
although we have 2 NFS systems. Gives warning hints about
"you are doing live migration, bla bla, ..."
2) disk with snapshot: VMs tab -> Disk -> Move: Button greyed out
3) BUT! Disks tab -> Move: Works! No hints about "live migration"
I do not dare to click go ...
While 1/2 might be consistent, they do not match 3. Maybe someone
can give a hint what should work, what not, and where we might have
an error.
Thanks.
Markus
2
2
Hi
I need help installing the guest tools; can someone help me in getting
the ISO image?
Thanks,
Nagaraju
2
1
Hi
After installing the Windows 7 OS in ovirt, by default it creates the user ID
"user". May I know the default password for the VMs?
Thanks,
Nagaraju
2
2
Hi
If I try to edit a VM, most of the dialog is empty and I can't choose anything.
Is that a known bug?
Best regards
Marc
[cid:image001.png@01D0EFB1.17A0FA20]
ovirt-engine.noarch                                      3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-backend.noarch                              3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-cli.noarch                                  3.6.0.1-0.1.20150821.gitac5082d.el6             @ovirt-3.6
ovirt-engine-dbscripts.noarch                            3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-extension-aaa-jdbc.noarch                   1.0.0-0.0.master.20150831142838.git4d9c713.el6  @ovirt-3.6
ovirt-engine-extensions-api-impl.noarch                  3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-jboss-as.x86_64                             7.1.1-1.el6                                     @ovirt-3.5-pre
ovirt-engine-lib.noarch                                  3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-restapi.noarch                              3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-sdk-python.noarch                           3.6.0.1-0.1.20150821.gitc8ddcd8.el6             @ovirt-3.6
ovirt-engine-setup.noarch                                3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-setup-base.noarch                           3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-setup-plugin-ovirt-engine.noarch            3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-setup-plugin-ovirt-engine-common.noarch     3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch  3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-setup-plugin-websocket-proxy.noarch         3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-tools.noarch                                3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-userportal.noarch                           3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-vmconsole-proxy-helper.noarch               3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-webadmin-portal.noarch                      3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-websocket-proxy.noarch                      3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  @ovirt-3.6
ovirt-engine-wildfly.x86_64                              8.2.0-1.el6                                     @ovirt-3.6
ovirt-engine-wildfly-overlay.noarch                      001-2.el6                                       @ovirt-3.6
--_000_867A8F0949022146AA9D43C9D643DDF24236C6FDchronos2adsunim_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
<meta name=3D"Generator" content=3D"Microsoft Word 15 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
margin-bottom:.0001pt;
font-size:11.0pt;
font-family:"Calibri","sans-serif";
mso-fareast-language:EN-US;}
a:link, span.MsoHyperlink
{mso-style-priority:99;
color:#0563C1;
text-decoration:underline;}
a:visited, span.MsoHyperlinkFollowed
{mso-style-priority:99;
color:#954F72;
text-decoration:underline;}
span.E-MailFormatvorlage17
{mso-style-type:personal-compose;
font-family:"Calibri","sans-serif";
color:windowtext;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri","sans-serif";
mso-fareast-language:EN-US;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:70.85pt 70.85pt 2.0cm 70.85pt;}
div.WordSection1
{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang=3D"DE" link=3D"#0563C1" vlink=3D"#954F72">
<div class=3D"WordSection1">
<p class=3D"MsoNormal"><span lang=3D"EN-US">Hi <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">If I try to edit a vm most of d=
ialog is empty and I cant chos e anything.<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Is that a known bug?<o:p></o:p>=
</span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US"><o:p> </o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US"><o:p> </o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Best regards <o:p></o:p></span>=
</p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">Marc <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span style=3D"mso-fareast-language:DE"><img width=
=3D"852" height=3D"648" id=3D"Grafik_x0020_1" src=3D"cid:image001.png@01D0E=
FB1.17A0FA20"></span><span lang=3D"EN-US"><o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US"><o:p> </o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine.noarch =
&nb=
sp; =
&nb=
sp; =
&nb=
sp; =
&nb=
sp; =
3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 &nbs=
p; &=
nbsp; &nb=
sp; =
&nb=
sp; =
(a)ovirt-3.6
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-backend.noarch&nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; 3.6.0-0.0.master.20150909083445.g=
itbcc44ff.el6 &n=
bsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; (a)ovirt-3.6
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-cli.noarch &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; 3.6.0.1-0.1.2=
0150821.gitac5082d.el6  =
; &n=
bsp;  =
; &=
nbsp; &nbs=
p; &=
nbsp; (a)ovirt-3.6
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-dbscripts.noarch&n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; 3.6.0-0.0.master.20150909083445.gitbcc44ff.=
el6 =
&nb=
sp;  =
; &n=
bsp;  =
; (a)ovirt-3.6
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-extension-aaa-jdbc=
.noarch &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; 1.0.0-0.0.mas=
ter.20150831142838.git4d9c713.el6 =
&nb=
sp;  =
; &n=
bsp;  =
; @ovirt-3=
.6
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-extensions-api-imp=
l.noarch &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; 3.6.0-0.0.master.2=
0150909083445.gitbcc44ff.el6  =
; &n=
bsp;  =
; &=
nbsp; &nbs=
p; (a)ovirt-3.6&nb=
sp;
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-jboss-as.x86_64&nb=
sp; =
&nb=
sp; =
&nb=
sp; =
&nb=
sp; =
7.1.1-1.el6 &nb=
sp; =
&nb=
sp; =
&nb=
sp; =
&n=
bsp;  =
; @ovirt-3=
.5-pre<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-lib.noarch &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; 3.6.0-0.0.mas=
ter.20150909083445.gitbcc44ff.el6 =
&nb=
sp; =
&n=
bsp;  =
; @ovirt-3=
.6
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-restapi.noarch&nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; 3.6.0-0.0.master.20150909083445.g=
itbcc44ff.el6 &n=
bsp;  =
; &n=
bsp; &nbs=
p; &=
nbsp; (a)ovirt-3.6
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-sdk-python.noarch&=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; 3.6.0.1-0.1.20150821.gitc8ddcd8.el6 &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nb=
sp; =
(a)ovirt-3.6
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-setup.noarch =
&nb=
sp; =
&nb=
sp; =
&nb=
sp; =
&nb=
sp; 3.6.0-0.0.master.201509=
09083445.gitbcc44ff.el6 &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp;
&n=
bsp; (a)ovirt-3.6 =
<o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-setup-base.noarch&=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6&n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;
&n=
bsp; (a)ovirt-3.6 <o:p></o:p></span>=
</p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-setup-plugin-ovirt=
-engine.noarch &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6&n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;
&n=
bsp; (a)ovirt-3.6 <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-setup-plugin-ovirt=
-engine-common.noarch =
&nb=
sp; =
&nb=
sp; =
3.6.0-0.=
0.master.20150909083445.gitbcc44ff.el6 &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp;
@ovirt-3=
.6 <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-setup-plugin-vmcon=
sole-proxy-helper.noarch &nb=
sp; =
&nb=
sp; =
&nb=
sp; 3.6.0-0.0.master.201509=
09083445.gitbcc44ff.el6 &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p;
(a)ovirt-3.6 &nbs=
p; <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-setup-plugin-webso=
cket-proxy.noarch &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6  =
; &n=
bsp;  =
; &n=
bsp;  =
;
(a)ovirt-3.6 <o:p></o:p></sp=
an></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-tools.noarch =
&nb=
sp; =
&nb=
sp; =
&nb=
sp; =
&nb=
sp; 3.6.0-0.0.master.201509=
09083445.gitbcc44ff.el6 &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p;
(a)ovirt-3.6 <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-userportal.noarch&=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6&n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
;
@ovirt-3.6 <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-vmconsole-proxy-he=
lper.noarch &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; 3.6.0-0.0.master.20150909083445.g=
itbcc44ff.el6 &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;
@ovirt-3.6 <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-webadmin-portal.no=
arch  =
; &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 &nb=
sp; =
&nb=
sp; =
&nb=
sp; =
@ovirt-3.6 <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-websocket-proxy.no=
arch  =
; &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; 3.6.0-0.0.master.20150909083445.gitbcc44ff.el6 &nb=
sp; =
&nb=
sp; =
&nb=
sp; =
@ovirt-3.6 <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-wildfly.x86_64&nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; 8.2.0-1.el6 &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp;
@ovirt-3.6 <o:p></o:p></span></p>
<p class=3D"MsoNormal"><span lang=3D"EN-US">ovirt-engine-wildfly-overlay.no=
arch  =
; &n=
bsp;  =
; &n=
bsp;  =
; &n=
bsp;  =
; 001-2.el6 &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp; &nbs=
p; &=
nbsp;
@ovirt-3.6 <o:p></o:p></span></p>
[Attachment: image001.png — inline PNG image, 45586 bytes, created Tue, 15 Sep 2015 10:21:47 GMT]
vMQJAm0ScP2XASqAAAiAAAj4jECbL4J61ev1skPNHbpZ70a19FO+OB0Ojc7GppXs45JueDeXfC+t
a0HrNSWI35sRZ7K5+DtgDCcItEnAu0cdlkAABEAABA6HQJsvgnrV63WmQ80dulnvBq+Qe8Di7/Ns
b2hKLv6wbqg/2xYKaT41cHn5u+XlW1Jz8Vet/bbvRIaPDFefl9balIjcbNtGnPhQ4b+bFrvmoZNe
NCpTTJ85MrdS+Xi367wyNxxOb7fxlfEwyu3d9waWQAAEQAAE3BCwewvUe6vWvndEyerXE73O6r4g
6rx0Vi4NV7+S7F58zPZmSy+ydt+hlY169UL8Oa+8E57S1nm8G5Z/yeR+bjyYxdKuVjjhRPzV2oov
7VcdixE3j49N2ROJtX1hZDsdGZ5THFpzVZhsrswNTyTHc4bzOTYzHElv12/Mat/00KFv/itGT99h
HfuH1TDaBQEQAAEQ6ByBZm/V8nuH3qcTS9HN/cXpam9cviDU9O1M/OJ48u5a57pla9mrd6jL/tbr
ZSn3SGG/ZC7pmdx5Ek7KeoOJf6HEdwvsppb2nXEg/qqy4yIS2MFZgK7suyr8Sk3/LRPP7S+eNfw/
u7h1myUvptX6ExE629nOzTyoGiN3oOzGV3v8Wna4y99RNAcCIAACIOA5gepXQMO3asV7Z22OK7+1
xFj990utt7ZvnG0l+2NcvhYMPcwpNfrE5iXVzpvLtN/+O9TqqifjIpZ6LFAO1zy/jAdXsjkttmd/
kP7T0r5yM/HnKji5kw6fmZs7Mzz8ZiR8ZnhuVTRON980rlfnhs+kVX5XmXuTitFpBPl4MbpW5iaS
BZaJvhlJ72i+25XUWxmmMF6Twtb+7yjZJ3GZlJ/lkKajoSdLCm9LTZ+JpO/PCa+Gh69yDV3hjO5h
uVOipOGn+DR9P6JVD98XvazfU7uSFrcIlM5H80TDWMVKlC+XNImJ+491CxZPTPv28G1cMo1fzVmc
c1y9gjT+AAIgAAIg0MsEavVAk7eq9aUzw3JC+Tk/6sgPdWWpcFGWxxLzFzML5Vdt/Rdf1duwLFS0
V2rNG01/4WoSxVAvmtvGO1GXNy2/4p1DqF+y8M8MX+phLcCXfRTzP5XYqCQxJf8z/4yvCNHK8JUf
SVMaNhN/tQ3z8SBxpnOxSCUh559k2Gf7+y/WMrOhzGMB9mmRnQpt/MLFUPGXjdCsLHGZFWWUe32x
v/G3jeiZNPdMH2Z5sZAKsfjSC+0pqVNSb2V/v2lhq/9Pi4VTQamqR2PSJFsvPtXurievs6UX5NhS
/GE0/EWwwpnyg8hF4aTwfz83npwQD5DAkn2VETfjhet3ROfr9bS2pNUtZW6WRyg1U+zhAk9MW78G
Vk9mN1IFw5OLgqQ4MpvBDdGRyevj/DEtV3HuUtmNjZMbGX1ahvPqtY8O7oAACIAACPQRgcZvVf29
w98amYtLPOfmwaHcuX6QuiqTJflcvPCNYr71bF589d6GuoRYlG3f0SLuo7/iC6mN2fIrPvpY1t7L
Ge1mi694DygwVsivBLWlHtaDL/t4kCuw0OzloCIyvHdYmMOiY3Q2dUvKfCzSvjdV9+KP2yBxJgSH
ed7TjdOKExKcdASnY6FNlUZFebwRvR7jEpCpSq4weUJiFLM1InDBT+bjT7JKvVl3dUvqrVR0uqnZ
BlFM8RFleOPf0NPAH6rE30IF7rPdsa1uMCOCeFaOs42i7n8oOi06LwX1Aanvf3XJinbkxRciN00H
ty/y7HWPgq5czy7uf58IimK8I9e1a1m+yHQVrllw7tKKkjmVSgg3+DBpbjivXuWwqxByo/7iMxAA
ARAAAX8QaPZWZWxj4QyFSFILmxRPMRNiNc47f0HwF1MscsJ4Pz5Jpld0a7YvvjpvQ0NC2L7RLO8+
diKRf6GpAjpCC0J0tv2Kr4zmtDiSlMC1+0mPdxPL3yVIgQTOpbR0cOrcbELc4Qf/VEsTp1oTfw2c
nQxqo3JCmuSqTi1uTganpcmHisKKKmk+faanGTuMZpr03Lak0Up13YZmTwZDT4p2T19IOlltKHhi
kgnxanNU/FsnKJ0qqHrg0NYrV/4brW3zXLmIqjbmQzJxiZ03otPGd8Dqs3SydstvRy4VtzfqjIyj
6jV1PZrh2uL3BNVAAARAAAS8JtD8rVqYvL6/fy/x6YMU+yx2p+7ySocvCPXO7Qx7kpy0vB8zKzYL
RO1efMxy0/qyrn6j1X/3efeKZw776/V4Wew1E3+v2EHVqVWuvU93Kj6S5Y8K6oqyxILSK7reKH6h
ZD6SI3qx+NLz/ef6macMb7mu1Qi/rl+yqkXbwlY/xyLRUxllpaJHxS8W6J8RMjnwSndB6y8f/qAW
xDQ6a15rj7tumRQt144VfTdLuvLfdJWUXygbLWh8lijkRoe9fV5FTmkYv4lnzs/R7NeqjqjFQuik
VInXEVKJ5K9loLU+CTuOqlc9Nh18hGEaBEAABECgCwRq3/uN36rifUHz7PnrYCyR+RubD2kvqerT
3vfaktvK0hPrC2j/OU39erhAmtL+xVf5Wq9+G9ZRI1XvPt1bWzHQyy/EJuLPLuHYIAlZ8VHkXHwj
l2WzEZJQUpAt5TZC2ow77XHRl4PciRybE8u1zbp0sVHcEaSblNRqNS5sfaiCieukkIaTWtN0rM5N
fsYWHiTMiYCZ23e09SjpzwrxcxTgtdg3PeTTBA3/V5UMmwyOWf23XDvyv+axp8iibpOtXTUif5ZG
1ZWsvph7h+iFjZUxFXb0juzcWfiaTZ6gDLCB17lLlHGmiLpgpd5f0AO0zqt34W8iNAECIAACINAt
Anbv/sZv1Yoa0pXMwqlMVFvC6OCobW7tbrJAISRrXVr28VFhaUXP0tW8+HjRum9D+sz2jWZ597Hy
S9bqjvv3qYP+drlIs8if/XyzTPTY8LHKM2EqKrMHJyX2pBDkyoORlC48YdFpbU5aMLG6tH9eWAgt
RQuVewSOydFThfnQsDDYsCQfOTeFqfzZRYql6U1T6+fZ0nMedzSPC0F1kvcrSitRUjTdrcK+WUpO
FRY2NP/PbyxU+V8xgM38tx3tswn+DRF4F4JLC6cKWzytzKchknKlm3EWvaBVHPt07ZtJYlXrid6R
0DzTOlI+nLtU7ma5xaYj0uXnF82BAAiAAAh0iYCd/Gv2Vq14+zxYCH0dPeZU/1U1pyx9HVq4Zi4w
0A1TmKnwWVrb8c/2xVf/bVhPY1he8aH54DcVIqGStPP3aZdGyHkzR16/fl2vdOHnQvCd4MFLr/Y1
dO7VoZRU09OTW9efp88cSut93ujz356v/7R+4SNdtfZ5b9E9EAABEOgvAh3VA7UviI42d+gj4+qF
GD0fJYeH3hiKfRjz0PNmkT/na3A8dOoQTQ1afw8RNZoGARAAARDoIQJdfj92uTlfDsTBgR59OzZy
zFsHm4k/b1uDtUElMCDR40EdXvQbBEAABFonMGgvCOf93dvb07G+0Tpe25oQfyYWKbHyPO3NLpQe
DxLMgQAIgAAIgAAIDBqBtVX994sDgYC3fT/y+v96zeorykdfP/K2PVgbWAKY8DewQ4+OgwAI9AGB
juqB2hdER5s79OFw9EJ8xWL/n5i2i030/ejI0REP3T7yf679n5EzFUunPbQOUyAAAiAAAiAAAiAA
Am4JrH2/lr6fplqBtwLydPUyZ7fWqsr/B5kemPW8bbJCdRAAARAAARAAARDoOAESZpryoyP0+9pf
6mrXAT7nb+7aHP1iBw4QAAEQAAEQAAEQAIFDJvBKCDNxSMclivx57g8Xf8/3nl/68yXE/zyHC4Mg
AAIgAAIgAAIg4JwAiTGSZCTMqMqxo8fCU2HndZ2XPDL7waxZOnElETkdabD+w7ldlAQBEAABEAAB
EAABEHBK4BVb+0Gf50dVhoaGZs7OjIyM2FffzSU/zug/bKeXCMa/TM2OOmrtSPZ2YuknEpj6vjPa
LtKkNHl7Xu8r48gjFAIBEAABEAABEACBASHwitF+fvn1fPY/s9raXjoo5keLcesqPyrBxV8p9l2i
PB/w5/TMTbZgvVMf4JHl7xbXvi7sjY4831UHhDO6CQIgAAIgAAIgAAL+JEDz/CgGR5G/Ru7Vij9W
SL+XDTgL/v0He6aqY6HZsxH5dHCINWzJn5DgFQiAAAiAAAiAAAj0PgFtVxeK+TVRfrY9/TmvTMeM
tC8JwZkZfqYLWmESi++lc4+T4uYMn/N3LBSd/e8j7GUh98/nwXOTB882tgoqGzrGXu1TOvjgFf9F
YarIo5FvaEK0fJO9OjjgNy13ep8+egACdgSaPOT8Rxj176v4PUZxXXFT/ypZ7lU0Y2/fWtr8nUf+
Taw/K8OuUfG1NbzSv87ix8L1DzRvze84fa/1T01jFW7bNmF/k2xSx8XfHLyA8LxhQ5XQdKjCeb3X
5Xbq2TEGwsq3DE+/a7Fmlm/smPirTyPf7GGwIWnbr+qbNo9NIwJVz1J77tmMVPXTV/lQOx+Ies+8
NhTimyMeP+2BNA8tBWbc1HrHyfNDfxlVDrE+OlVfXxNL5ffN0lxt07pjdgathW0r6jfFM2C1YxY2
U3tlh2y6g7+GB4XA8NDw0NEh+g0PCvg1yvNW8bCZ88eCl79KnaOlwaXc9UulC8uJd1npcfLSD+Gv
bs8GRHmmFfg5fWT50ULun6r0/myIcfEX+igiVTSgUlL4mP6pKHaUaTJRlNwrPF56Lk3uFYyPBmWw
0M/BI0DTKR4F+FdIdL1wf2Z+RYNQO8fWjL3TxTy7xb+B5cMmVi8+rLRvlLezwGP78wqVmF5YvlKz
/1OFHUt13m4+rGcEyn816H1hC8vvq5YZJEYXmDGtpMLtul5Vd9bydxA1xP8aehZr0pAtB6vzpifV
Lol8B3fY7KYD7LaWxQjrCRSTgBj0/JQYzcYPg6VK2c5uxfNj9/0xWrQt2ZxAe+6NVjwSNiNVi935
QNR75jkFde3x3uS50Ej5tVLJ5tla5nsWodcNv1D1UAW/3pvU3kfi2Puv3FJhhBerIav+kFGleOR4
5QcVzRmvOcOaVrTFioblY/+VW2MRCqzodkaM92ZtT/kdvEPtvhO414BA7dfK/FvC9q8g69+Nu7n/
YEdpZcfz53s1X4yvMxl+rllmAo6MVH439G+IykIhtkHBQhwg0McERgPBX0q7RgdDV5aXv6Nzobzt
Og+qa2F2ocz4EUp8t8Buajdn0j9X06FXrPbRzP0Cq7RvKRoMVK/eIrPLy1/GgyvZ3C5XVGUjJLYe
KeyXzCWLJ8q6HvWnHaMku4Vgo8eDRnPKfHUXbEfUtl9NOkuGAm9LTFVL3GTdhkr/rvc3ia3ztnbs
u1n/2bSWb0CgkF+Rw+8KM00fhpoO1u1XzWPjkoDZrbbcq4JjGSmHX+kWB0L9YY3eHyMNGjkuUZRh
76UoMTIZoSQVHccnJ0fKry2h/Njk+zbKjyScuiNJVcrPUZ9arqhbHxnjT/ueEwds38KOnEQhELAQ
GKUvS9F4SzX5SvJ9/mhZyTHrl0//V0g8/hGdtl+nCtxSKCT995C0Uyho308cINCXBEanwu8o2cdC
utQe+j+5ahQh13/i5i1ZuWnMvTCqB86lhIJc5gE8YT//c4Xp0uOs8k54ihQbhZrMqRtaEV6ef88r
jOyu53+RFzSb2mloxAZjsvusGDyuqUJr3cZbBtj2q1FnyTqXNZIkQqd1G+Kyw8Xh3GGHRusb5PNp
wnqgtfHDYNdB+37ZPTYuCRj9as+9KjqWkWqbWyMDJLCY+r0INPxz4zlFAb/OrD2rU2Hk2LG953zr
s8rDiPmVo4AVn/NJ7fRGrDm43jI05cu9PVYT2mi5otkUNbGnqs/29sjz5hQr38LNy6MECNgSMIMF
Tf5u/I+9/yqoI5JkDenxL5j+TaB/ljkL6EmRMyMb3xf2MBwg0LcEArPX4uzBpWRZ/1GmzAjy7ZaK
RlytcN+8SbHAJAXnag5VtbkZmL1AAtESIPw5fekBi18TieZ3Z+PvKPPXc6b2LD2+k2Hx2XcrbBf+
mSmaAkX7ZHQ2Nl3M/2SjWZVHwtpuLrvCpLcDFepTTA02A4ZGG4bb/NOaftneFDX1hmhiyYOiPFUp
c2sbejcs/5LJ/Sxq1rcpumaRy/YOu3wWGxokPcSd14+GD4OtHdt+2T42zglwZ/RBadc9+5Ei2aT/
g6T0U75yRzELW0cDYfvMSxEeYhDn+5PHGP8jz89SVtdMOpEy095QR0OhMbXwX+Il82xjY0/E814W
1urG/HhB2kFDstN+PAo+9lxLWKmFjec1ArHlihYuoonvN6ipkYZPos1b2OWTi+IgQATKwQIHX8n/
4PMkaL6FlRyPqPN/gdG/xgrHItboeiO+VIttLP3gTCtioECgFwmMzqa++yr8wyU9zfrepYy0sPyd
iJBp4kzkTLPHF+IiJkfCK3VLynwsErs31fiXYkMmEbGjm7VZYPZugvLIZppY7Nhkht8Cs7eXFyQz
nzujz+GtwEiJv2D8/epZgKEpufggV6PkmCyVeHZYTAEWsxKpCaN1Pm3Osn1Uldv2/bLrrHBPb+i9
edVJQxQr/TKuarnyjzPSrQYByIYOt/KANTBYWv+BVeTfGzwM9iTt+mX72DgnUH6W2nfPdqRCs5eD
9A8SGos7LFz/h+WbDUSDZ952mI5HoqE97R2UofnlxhtKOh2V1CV+83tVOsOzUly3secb/9QmKfGz
MnBIkTdWkdeySrPTEWmHVGZmbUeKnK4KDrZcsaI/Qncek8ZqtZ/+htV8tnkLt/L0os4AEjBzu/xL
eumBtKDPSm/2lWSM9vlbHkBg6DIIgEBXCFQsI+hKi2ikNQIYqda4oRYI9CQBbc4fDhAAARAAARAA
ARAAgYEgAPE3EMOMToIACIAACIAACICARgBpXzwJIAACIAACIAACIDBABBD5G6DBRldBAARAAARA
AARAAOIPzwAIgAAIgAAIgAAIDBCB/z8Qpx6xGfR+ewAAAABJRU5ErkJggg==
--_004_867A8F0949022146AA9D43C9D643DDF24236C6FDchronos2adsunim_--
3
3
18 Sep '15
Hi list!
I'm new to oVirt. Right now I have configured a cluster with two hosts and a VM.
I can migrate the VM between the two hosts without any problem, but what I need is for the VM to migrate automatically if a host goes down.
The migration only occurs if I set a host to "Maintenance", but this is not (only!) what I need...
Can someone help me configure oVirt (3.5) to automatically check the hosts and migrate the VM on host failure?
Thanks a lot!
Best regards,
Luca Bertoncello
--
Visit our websites:
www.queo.biz Agency for brand management and communication
www.queoflow.com IT consulting and custom software development
Luca Bertoncello
Administrator
Phone: +49 351 21 30 38 0
Fax: +49 351 21 30 38 99
E-Mail: l.bertoncello(a)queo-group.com
queo GmbH
Tharandter Str. 13
01159 Dresden
Registered office: Dresden
Commercial register: Amtsgericht Dresden HRB 22352
Managing directors: Rüdiger Henke, André Pinkert
VAT ID: DE234220077
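For context: oVirt does not live-migrate a VM away from a host that has already failed; instead, a VM marked as "Highly Available" is restarted on another host by the engine, provided power management (fencing) is configured for the hosts. A minimal, hedged sketch of enabling that flag over the REST API, where the engine URL, credentials and VM id are placeholders:
# Assumptions: engine API reachable at https://engine.example.com/api,
# admin@internal credentials, VM_ID is the id of the VM to protect;
# host power management (fencing) must already be configured in the web UI.
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -X PUT \
  -d '<vm><high_availability><enabled>true</enabled></high_availability></vm>' \
  https://engine.example.com/api/vms/VM_ID
The same option is exposed in the web UI when editing the VM (High Availability tab), which may be the simpler route for a single VM.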
Hi,
I am unable to access the web console of the VM; it is asking for a console.vv file.
Does any plugin need to be installed? I am using the latest version of Firefox.
Thanks,
Nagaraju
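The console.vv file is a connection descriptor meant to be opened by a SPICE/VNC client on the desktop rather than rendered by Firefox itself, so no browser plugin is strictly required if a client such as virt-viewer is installed. A hedged sketch for a Linux client (package manager and download path are assumptions):
# Install the virt-viewer client, which provides remote-viewer
sudo yum install -y virt-viewer
# Open the downloaded console descriptor
remote-viewer ~/Downloads/console.vv
Telling Firefox to always open .vv files with remote-viewer avoids the download prompt on subsequent connections.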
Hi,
I am unable to attach the ISO domain to the newly created data center; below are the logs:
[root@cstlb2 ~]# tail -f /var/log/ovirt-engine/engine.log
2015-09-18 12:50:08,493 ERROR
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(org.ovirt.thread.pool-8-thread-8) [303cb8a5] Transaction rolled-back for
command: org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
2015-09-18 12:50:08,494 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] START,
AttachStorageDomainVDSCommand( storagePoolId =
92328f51-9152-4730-a558-8c1fd0b4e076, ignoreFailoverLimit = false,
storageDomainId = 263f7911-c5a2-495a-92c7-ce765b65a5b3), log id: c271dba
2015-09-18 12:50:08,724 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Failed in
AttachStorageDomainVDS method
2015-09-18 12:50:08,728 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Command
AttachStorageDomainVDSCommand( storagePoolId =
92328f51-9152-4730-a558-8c1fd0b4e076, ignoreFailoverLimit = false,
storageDomainId = 263f7911-c5a2-495a-92c7-ce765b65a5b3) execution failed.
Exception: IrsOperationFailedNoFailoverException: IRSGenericException:
IRSErrorException: Failed to AttachStorageDomainVDS, error = Storage domain
does not exist: (u'263f7911-c5a2-495a-92c7-ce765b65a5b3',), code = 358
2015-09-18 12:50:08,729 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] FINISH,
AttachStorageDomainVDSCommand, log id: c271dba
2015-09-18 12:50:08,730 ERROR
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Command
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand throw
Vdc Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailoverException:
IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS,
error = Storage domain does not exist:
(u'263f7911-c5a2-495a-92c7-ce765b65a5b3',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
2015-09-18 12:50:08,733 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Command
[id=fdb5bcec-5aec-4ca1-a4f1-2ca3895e30f4]: Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot:
storagePoolId = 92328f51-9152-4730-a558-8c1fd0b4e076, storageId =
263f7911-c5a2-495a-92c7-ce765b65a5b3.
2015-09-18 12:50:08,736 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Correlation ID: 59c91a6d,
Job ID: a31fb420-9106-49f6-bf9f-78a87cca4f0a, Call Stack: null, Custom
Event ID: -1, Message: Failed to attach Storage Domain ISO_DOMAIN to Data
Center Pulse. (User: admin@internal)
2015-09-18 12:50:08,739 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-44) [59c91a6d] Lock freed to object
EngineLock [exclusiveLocks= key: 263f7911-c5a2-495a-92c7-ce765b65a5b3
value: STORAGE
, sharedLocks= ]
2015-09-18 12:50:53,060 INFO
[org.ovirt.engine.core.bll.storage.GetStorageDomainsWithAttachedStoragePoolGuidQuery]
(ajp--127.0.0.1-8702-1) vds id b8804829-6107-4486-8c98-5ee4c0f4e797 was
chosen to fetch the Storage domain info
2015-09-18 12:50:53,103 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(ajp--127.0.0.1-8702-2) [1f9cdc9a] Lock Acquired to object EngineLock
[exclusiveLocks= key: 263f7911-c5a2-495a-92c7-ce765b65a5b3 value: STORAGE
, sharedLocks= ]
2015-09-18 12:50:53,118 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] Running command:
AttachStorageDomainToPoolCommand internal: false. Entities affected : ID:
263f7911-c5a2-495a-92c7-ce765b65a5b3 Type: StorageAction group
MANIPULATE_STORAGE_DOMAIN with role type ADMIN
2015-09-18 12:50:53,123 INFO
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(org.ovirt.thread.pool-8-thread-6) [58cbe80] Running command:
ConnectStorageToVdsCommand internal: true. Entities affected : ID:
aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2015-09-18 12:50:53,124 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-6) [58cbe80] START,
ConnectStorageServerVDSCommand(HostName = host1, HostId =
b8804829-6107-4486-8c98-5ee4c0f4e797, storagePoolId =
00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList =
[{ id: 6fbce0a8-955f-4ad4-8822-1ea0c31990fb, connection:
cstlb2.bnglab.psecure.net:/var/lib/exports/iso, iqn: null, vfsType: null,
mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null
};]), log id: 671b2a09
2015-09-18 12:50:53,156 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-6) [58cbe80] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: Failed to connect Host host1 to
the Storage Domains ISO_DOMAIN.
2015-09-18 12:50:53,157 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-6) [58cbe80] FINISH,
ConnectStorageServerVDSCommand, return:
{6fbce0a8-955f-4ad4-8822-1ea0c31990fb=477}, log id: 671b2a09
2015-09-18 12:50:53,158 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-6) [58cbe80] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: The error message for connection
cstlb2.bnglab.psecure.net:/var/lib/exports/iso returned by VDSM was:
Problem while trying to mount target
2015-09-18 12:50:53,159 ERROR
[org.ovirt.engine.core.bll.storage.NFSStorageHelper]
(org.ovirt.thread.pool-8-thread-6) [58cbe80] The connection with details
cstlb2.bnglab.psecure.net:/var/lib/exports/iso failed because of error code
477 and error message is: problem while trying to mount target
2015-09-18 12:50:53,161 ERROR
[org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
(org.ovirt.thread.pool-8-thread-6) [58cbe80] Transaction rolled-back for
command: org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand.
2015-09-18 12:50:53,161 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] START,
AttachStorageDomainVDSCommand( storagePoolId =
92328f51-9152-4730-a558-8c1fd0b4e076, ignoreFailoverLimit = false,
storageDomainId = 263f7911-c5a2-495a-92c7-ce765b65a5b3), log id: 718da44b
2015-09-18 12:50:53,369 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] Failed in
AttachStorageDomainVDS method
2015-09-18 12:50:53,372 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] Command
AttachStorageDomainVDSCommand( storagePoolId =
92328f51-9152-4730-a558-8c1fd0b4e076, ignoreFailoverLimit = false,
storageDomainId = 263f7911-c5a2-495a-92c7-ce765b65a5b3) execution failed.
Exception: IrsOperationFailedNoFailoverException: IRSGenericException:
IRSErrorException: Failed to AttachStorageDomainVDS, error = Storage domain
does not exist: (u'263f7911-c5a2-495a-92c7-ce765b65a5b3',), code = 358
2015-09-18 12:50:53,373 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] FINISH,
AttachStorageDomainVDSCommand, log id: 718da44b
2015-09-18 12:50:53,374 ERROR
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] Command
org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand throw
Vdc Bll exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.irsbroker.IrsOperationFailedNoFailoverException:
IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS,
error = Storage domain does not exist:
(u'263f7911-c5a2-495a-92c7-ce765b65a5b3',), code = 358 (Failed with error
StorageDomainDoesNotExist and code 358)
2015-09-18 12:50:53,376 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] Command
[id=07076de5-abca-4eb8-91c1-3e147b03c4e7]: Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StoragePoolIsoMap; snapshot:
storagePoolId = 92328f51-9152-4730-a558-8c1fd0b4e076, storageId =
263f7911-c5a2-495a-92c7-ce765b65a5b3.
2015-09-18 12:50:53,379 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] Correlation ID: 1f9cdc9a,
Job ID: a8c14a35-93b8-47d4-afdc-83effd15c308, Call Stack: null, Custom
Event ID: -1, Message: Failed to attach Storage Domain ISO_DOMAIN to Data
Center Pulse. (User: admin@internal)
2015-09-18 12:50:53,381 INFO
[org.ovirt.engine.core.bll.storage.AttachStorageDomainToPoolCommand]
(org.ovirt.thread.pool-8-thread-46) [1f9cdc9a] Lock freed to object
EngineLock [exclusiveLocks= key: 263f7911-c5a2-495a-92c7-ce765b65a5b3
value: STORAGE
, sharedLocks= ]
^C
Thanks,
Nagaraju
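The repeated "Problem while trying to mount target" (error 477) indicates the host could not mount the NFS export backing the ISO domain, so the attach fails before the engine ever sees the domain. A hedged troubleshooting sketch, run on the host, reusing the export path from the log (forcing NFSv3 here purely as a test):
# Is the export visible from the host at all?
showmount -e cstlb2.bnglab.psecure.net
# Try the mount manually
mkdir -p /tmp/isotest
mount -t nfs -o vers=3 cstlb2.bnglab.psecure.net:/var/lib/exports/iso /tmp/isotest
# The export should be owned by vdsm:kvm (uid/gid 36:36)
ls -ln /tmp/isotest
umount /tmp/isotest
If the manual mount fails, the cause is on the NFS side (exports options, firewall, rpcbind/nfs services) rather than in the engine; /var/log/vdsm/vdsm.log on the host usually shows the exact mount error.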
Hi oVirt Team,
I'm trying to set up oVirt 3.6 on three virtualization hosts
connected to two other machines solely providing a replica 2 gluster
storage. I know and understand that this setup is basically unwise
and not supported, but to prevent the gluster storage from going
offline I followed the suggestion in
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/sect-Managing_Split-brain.html
8.10.1.1 and added a dummy node with no bricks. Removing the
requirement for replica 3 in nfs.py and allowing replica count 2 in
vdsm.conf makes the installation possible without error.
But at the first login to the web UI, the only thing showing up from what was
configured during the installation process is the virtualization host.
The Engine VM is not listed in the Virtual Machines tab, though the
host shows one running VM. After watching several YouTube videos
of self hosted engine installations I'm guessing the Engine VM
should be visible. Is there anything I can do?
The Storage tab is completely empty. The ISO Domain that is
configured during the install is not shown, and neither is the gluster
storage that is storing the Engine VM.
I've heard in this video from Martin Sivak
https://www.youtube.com/watch?v=EbRdUPVlxyQ that the storage domain
where the Engine is stored is hidden. Why is that?
So if the Engine storage is separated from the other VMs' storage,
should there be two gluster volumes for VM storage when the self
hosted setup is used?
Why is the ISO Domain not shown?
Since I'm (still) an oVirt newbie I'm guessing something has gone
wrong during the installation or I'm missing some piece of the
puzzle. So here are more details:
- The gluster hosts are running an up-to-date CentOS 7.1 with Gluster 3.7.
The volume where the Engine is stored has the recommended
optimizations as described in
http://www.ovirt.org/Features/GlusterFS_Storage_Domain
- The self hosted engine deployment install is running on a minimal
CentOS 7.1 (up to date).
http://www.ovirt.org/Hosted_Engine_Howto#Fresh_Install
- The repo from which oVirt is coming:
http://resources.ovirt.org/pub/yum-repo/ovirt-release36.rpm
- I modified
/usr/share/ovirt-hosted-engine-setup/plugins/ovirt-hosted-engine-setup/storage/nfs.py
and substituted the replica 3 requirements appropriately for replica 2
- and added allowed_replica_counts = 1,2,3 in /etc/vdsm/vdsm.conf
- hosted-engine --deploy runs without errors. Except for
'balloonInfo' errors, vdsm.log doesn't show any (obvious) errors either.
- The engine setup in the VM (CentOS 7.1 again) runs without
problems as well.
- After the install has finished, the Engine is _not_ started
automatically. After waiting some minutes I started it by running
hosted-engine --vm-start.
So far I've wiped everything clean and tried to install oVirt 3.6
twice, with the same result both times, so I'm not sure how to proceed.
Any suggestions on what may have gone wrong or what I might have missed?
Cheers
Richard
--
/dev/null
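For anyone comparing notes, a few hedged diagnostic commands on the virtualization host can show whether the hosted engine HA services and the gluster volume are healthy; service names and log paths are as shipped by oVirt 3.6, so adjust if your layout differs:
# State of the hosted engine VM as seen by the HA agent
hosted-engine --vm-status
# Both the HA agent and the broker must be running for the engine VM to be managed
systemctl status ovirt-ha-agent ovirt-ha-broker
# Health of the gluster volume backing the engine storage domain
gluster volume status
# The agent log usually explains why the engine VM was not started automatically
tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log
As far as I recall, in 3.6 the hosted engine storage domain and the Engine VM are only imported into the web UI after a first regular data storage domain has been added to the data center, which may explain the empty Storage tab.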
Hi,
What is the best way to make daily backups of my VMs without shutting
them down?
I found the Backup-Restore API and other material, but no ready-to-run
tool/script which I can use. I plan to integrate it into backuppc. Or is
there any "best practice guide for backup"? ;-)
In the meantime I integrated engine-backup into backuppc as a pre-script.
cheers
gregor
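For reference, the engine-backup pre-script mentioned above boils down to something like the following; the file locations are assumptions, and note that this covers the engine configuration and databases only, not the VM disks:
# Full engine backup (configuration + databases) into one archive
engine-backup --mode=backup --scope=all \
  --file=/var/backup/engine-$(date +%F).tar.gz \
  --log=/var/backup/engine-backup-$(date +%F).log
Live backups of the VM disks themselves still need something built on the Backup-Restore API or on snapshot/export operations.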
Hi,
I am not sure if it's the right place to post this question.
I tried to do a P2V operation of a Win2k3 server to oVirt 3.5.
The physical server is configured as a RAID5 of 3x146 GB, about 290 GB usable
space, but the server only uses about 20 GB.
I am wondering how to shrink the disk of the migrated physical machine; do
I have to do it before using the P2V tool?
The same question for Linux servers.
Thanks.
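One common, non-oVirt-specific approach is libguestfs' virt-resize, which copies a guest image into a smaller disk; a hedged sketch, where the image file names, the 40G target size and the partition to shrink (/dev/sda1) are all assumptions to check against the virt-filesystems output:
# Inspect partitions and filesystems in the converted image
virt-filesystems --long -h --all -a win2k3-p2v.img
# Create a smaller destination image, leaving headroom above the ~20 GB in use
qemu-img create -f raw win2k3-small.img 40G
# Copy into the smaller disk, shrinking the data partition (NTFS is handled via ntfsresize)
virt-resize --shrink /dev/sda1 win2k3-p2v.img win2k3-small.img
The same approach works for Linux guests with ext3/ext4 filesystems; the shrunk image can then be imported into oVirt.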
17 Sep '15
The oVirt Project is pleased to announce the availability
of the Sixth Beta release of oVirt 3.6 for testing, as of September 17th,
2015.
This release is available now for Fedora 22,
Red Hat Enterprise Linux 6.7, CentOS Linux 6.7 (or similar) and
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar).
This release supports Hypervisor Hosts running
Red Hat Enterprise Linux 7.1, CentOS Linux 7.1 (or similar),
Fedora 21 and Fedora 22.
Highly experimental support for Debian 8.1 Jessie has been added too.
This release of oVirt 3.6.0 includes numerous bug fixes.
See the release notes [1] for an initial list of the new features and bugs
fixed.
Please refer to release notes [1] for Installation / Upgrade instructions.
New oVirt Node ISO and oVirt Live ISO images will be available soon as well [2].
Please note that mirrors [3] may need about one day before being
synchronized.
Please refer to the release notes for known issues in this release.
[1] http://www.ovirt.org/OVirt_3.6_Release_Notes
[2] http://plain.resources.ovirt.org/pub/ovirt-3.6-pre/iso/
[3] http://www.ovirt.org/Repository_mirrors#Current_mirrors
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com