On 25-12-2015 5:26, Arik Mitschang wrote:
> Hi ovirt-users,
>
> I have been working on a new install of ovirt 3.6 hosted-engine and ran
> into difficulty adding a gluster data storage domain to host my VMs. I
> have 4 servers for gluster (separate from vm hosts) and would like to
> have the quorum enforcement of replica 3 without sacrificing space. I
> created a gluster volume using
>
> replica 3 arbiter 1
>
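> (For reference, a volume of this shape is created with a command along
> these lines; this is a reconstruction from the brick list below, not a
> paste of the exact command I ran:
>
> gluster volume create arbtest replica 3 arbiter 1 \
>     t2-gluster01b:/gluster/00/arbtest \
>     t2-gluster02b:/gluster/00/arbtest \
>     t2-gluster03b:/gluster/00/arbtest.arb \
>     t2-gluster03b:/gluster/00/arbtest \
>     t2-gluster04b:/gluster/00/arbtest \
>     t2-gluster01b:/gluster/00/arbtest.arb
>
> where every third brick listed is the arbiter for its replica set.)
>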
> That looks like this:
>
> Volume Name: arbtest
> Type: Distributed-Replicate
> Volume ID: 01b36368-1f37-435c-9f48-0442e0c34160
> Status: Stopped
> Number of Bricks: 2 x 3 = 6
> Transport-type: tcp
> Bricks:
> Brick1: t2-gluster01b:/gluster/00/arbtest
> Brick2: t2-gluster02b:/gluster/00/arbtest
> Brick3: t2-gluster03b:/gluster/00/arbtest.arb
> Brick4: t2-gluster03b:/gluster/00/arbtest
> Brick5: t2-gluster04b:/gluster/00/arbtest
> Brick6: t2-gluster01b:/gluster/00/arbtest.arb
> Options Reconfigured:
> nfs.disable: true
> network.ping-timeout: 10
> storage.owner-uid: 36
> storage.owner-gid: 36
> cluster.server-quorum-type: server
> cluster.quorum-type: auto
> network.remote-dio: enable
> cluster.eager-lock: enable
> performance.stat-prefetch: off
> performance.io-cache: off
> performance.read-ahead: off
> performance.quick-read: off
> performance.readdir-ahead: on
>
> But when adding it to oVirt as a storage domain I get the following error:
>
> "Error while executing action AddGlusterFsStorageDomain: Error creating
> a storage domain's metadata"
>
>
Anything in engine.log (/var/log/ovirt-engine/engine.log) around that time?
Anything in vdsm.log on your two hypervisors around that time?
(I'm guessing you'll see an error from vdsm about an unsupported replica
count; if so, have a look at /etc/vdsm/vdsm.conf.rpmnew.)
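For example, something like this should show whether your running config
allows the replica counts you need, and how it differs from the packaged
defaults (paths assume a stock 3.6 install; adjust as needed):

  grep -A3 '^\[gluster\]' /etc/vdsm/vdsm.conf
  diff -u /etc/vdsm/vdsm.conf /etc/vdsm/vdsm.conf.rpmnew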
Hi Joop,

Thanks for your response, and sorry for the long delay in mine. I had a
chance to try adding the domain again and capture the logs around that
operation. I am attaching the engine log and the vdsm log from the
hypervisor that handled the storage operations.
Also, I have the following in /etc/vdsm/vdsm.conf:

[gluster]
allowed_replica_counts = 1,2,3
The volume mounts successfully, and after the failed add attempt I see
the following in it:
arik@t2-virt01:~$ sudo mount -t glusterfs t2-gluster01b:arbtest /mnt/
arik@t2-virt01:~$ ls -ltr /mnt/
total 0
-rwxr-xr-x 1 vdsm kvm 0 Jan 20 08:08 __DIRECT_IO_TEST__
drwxr-xr-x 3 vdsm kvm 54 Jan 20 08:08 3d31af0b-18ad-45c4-90f1-18e2f804f34b
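I believe vdsm writes these metadata files with direct I/O, so a plain
direct-I/O write as the vdsm user might reproduce the I/O error outside
of vdsm (the file name below is just a scratch file):

  sudo -u vdsm dd if=/dev/zero of=/mnt/iotest bs=4096 count=1 oflag=direct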
I hope you can see something interesting in these logs!
Regards,
-Arik
Engine logs:
2016-01-20 08:08:36,492 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-11)
[5021a29d] Lock Acquired to object
'EngineLock:{exclusiveLocks='[t2-gluster01b:arbtest=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-20 08:08:36,506 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-11)
[5021a29d] Running command: AddStorageServerConnectionCommand internal: false. Entities
affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2016-01-20 08:08:36,507 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default
task-11) [5021a29d] START, ConnectStorageServerVDSCommand(HostName = t2-virt02,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='757bf4c3-5352-4d00-aa4a-651c8e0ffe34',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='null',
connection='t2-gluster01b:arbtest', iqn='null',
vfsType='glusterfs', mountOptions='null', nfsVersion='null',
nfsRetrans='null', nfsTimeo='null', iface='null',
netIfaceName='null'}]'}), log id: 74e8ac0a
2016-01-20 08:08:36,745 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default
task-11) [5021a29d] FINISH, ConnectStorageServerVDSCommand, return:
{00000000-0000-0000-0000-000000000000=0}, log id: 74e8ac0a
2016-01-20 08:08:36,757 INFO
[org.ovirt.engine.core.bll.storage.AddStorageServerConnectionCommand] (default task-11)
[5021a29d] Lock freed to object
'EngineLock:{exclusiveLocks='[t2-gluster01b:arbtest=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-20 08:08:36,812 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-30)
[6eb6b10] Running command: AddGlusterFsStorageDomainCommand internal: false. Entities
affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2016-01-20 08:08:36,825 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default
task-30) [6eb6b10] START, ConnectStorageServerVDSCommand(HostName = t2-virt02,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='757bf4c3-5352-4d00-aa4a-651c8e0ffe34',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='2d502b8e-dcab-4c2b-bf5b-600e3465fe64',
connection='t2-gluster01b:arbtest', iqn='null',
vfsType='glusterfs', mountOptions='null', nfsVersion='null',
nfsRetrans='null', nfsTimeo='null', iface='null',
netIfaceName='null'}]'}), log id: 629ef971
2016-01-20 08:08:36,847 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder] (default task-5) []
Failed to find host on any provider by host name 't2-ovirt01.sbisec.int'
2016-01-20 08:08:36,851 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] (default
task-30) [6eb6b10] FINISH, ConnectStorageServerVDSCommand, return:
{2d502b8e-dcab-4c2b-bf5b-600e3465fe64=0}, log id: 629ef971
2016-01-20 08:08:36,854 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default
task-30) [6eb6b10] START, CreateStorageDomainVDSCommand(HostName = t2-virt02,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='757bf4c3-5352-4d00-aa4a-651c8e0ffe34',
storageDomain='StorageDomainStatic:{name='arbtest',
id='3d31af0b-18ad-45c4-90f1-18e2f804f34b'}',
args='t2-gluster01b:arbtest'}), log id: 2c2b090d
2016-01-20 08:08:37,344 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder] (default task-52)
[] Failed to find host on any provider by host name 't2-ovirt01.sbisec.int'
2016-01-20 08:08:37,911 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-30)
[6eb6b10] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM
t2-virt02 command failed: Error creating a storage domain's metadata: ("create
meta file 'outbox' failed: [Errno 5] Input/output error",)
2016-01-20 08:08:37,911 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default
task-30) [6eb6b10] Command
'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand' return
value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc [code=362, message=Error
creating a storage domain's metadata: ("create meta file 'outbox' failed:
[Errno 5] Input/output error",)]]'
2016-01-20 08:08:37,911 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default
task-30) [6eb6b10] HostName = t2-virt02
2016-01-20 08:08:37,911 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default
task-30) [6eb6b10] Command 'CreateStorageDomainVDSCommand(HostName = t2-virt02,
CreateStorageDomainVDSCommandParameters:{runAsync='true',
hostId='757bf4c3-5352-4d00-aa4a-651c8e0ffe34',
storageDomain='StorageDomainStatic:{name='arbtest',
id='3d31af0b-18ad-45c4-90f1-18e2f804f34b'}',
args='t2-gluster01b:arbtest'})' execution failed: VDSGenericException:
VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS, error = Error creating
a storage domain's metadata: ("create meta file 'outbox' failed: [Errno
5] Input/output error",)
2016-01-20 08:08:37,911 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (default
task-30) [6eb6b10] FINISH, CreateStorageDomainVDSCommand, log id: 2c2b090d
2016-01-20 08:08:37,911 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-30)
[6eb6b10] Command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand' failed:
EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed in vdscommand to CreateStorageDomainVDS,
error = Error creating a storage domain's metadata: ("create meta file
'outbox' failed: [Errno 5] Input/output error",) (Failed with error
StorageDomainMetadataCreationError and code 362)
2016-01-20 08:08:37,913 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-30)
[6eb6b10] Command [id=28ba9b19-35dd-4e93-84dc-79d8abec1849]: Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot:
3d31af0b-18ad-45c4-90f1-18e2f804f34b.
2016-01-20 08:08:37,914 INFO
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-30)
[6eb6b10] Command [id=28ba9b19-35dd-4e93-84dc-79d8abec1849]: Compensating NEW_ENTITY_ID of
org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot:
3d31af0b-18ad-45c4-90f1-18e2f804f34b.
2016-01-20 08:08:37,923 ERROR
[org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand] (default task-30)
[6eb6b10] Transaction rolled-back for command
'org.ovirt.engine.core.bll.storage.AddGlusterFsStorageDomainCommand'.
2016-01-20 08:08:37,930 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-30)
[6eb6b10] Correlation ID: 6eb6b10, Job ID: 0a06ba3e-ffae-4ae1-bf57-516ec0dc824a, Call
Stack: null, Custom Event ID: -1, Message: Failed to add Storage Domain arbtest. (User:
admin@internal)
2016-01-20 08:08:38,125 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-12)
[50f3b70d] Lock Acquired to object
'EngineLock:{exclusiveLocks='[2d502b8e-dcab-4c2b-bf5b-600e3465fe64=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>, t2-gluster01b:arbtest=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-20 08:08:38,139 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-12)
[50f3b70d] Running command: RemoveStorageServerConnectionCommand internal: false. Entities
affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group
CREATE_STORAGE_DOMAIN with role type ADMIN
2016-01-20 08:08:38,145 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-12)
[50f3b70d] Removing connection '2d502b8e-dcab-4c2b-bf5b-600e3465fe64' from
database
2016-01-20 08:08:38,147 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default
task-12) [50f3b70d] START, DisconnectStorageServerVDSCommand(HostName = t2-virt02,
StorageServerConnectionManagementVDSParameters:{runAsync='true',
hostId='757bf4c3-5352-4d00-aa4a-651c8e0ffe34',
storagePoolId='00000000-0000-0000-0000-000000000000',
storageType='GLUSTERFS',
connectionList='[StorageServerConnections:{id='2d502b8e-dcab-4c2b-bf5b-600e3465fe64',
connection='t2-gluster01b:arbtest', iqn='null',
vfsType='glusterfs', mountOptions='null', nfsVersion='null',
nfsRetrans='null', nfsTimeo='null', iface='null',
netIfaceName='null'}]'}), log id: 4112db0e
2016-01-20 08:08:38,164 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder] (default task-62)
[] Failed to find host on any provider by host name 't2-ovirt01.sbisec.int'
2016-01-20 08:08:38,365 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (default
task-12) [50f3b70d] FINISH, DisconnectStorageServerVDSCommand, return:
{2d502b8e-dcab-4c2b-bf5b-600e3465fe64=0}, log id: 4112db0e
2016-01-20 08:08:38,372 INFO
[org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (default task-12)
[50f3b70d] Lock freed to object
'EngineLock:{exclusiveLocks='[2d502b8e-dcab-4c2b-bf5b-600e3465fe64=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>, t2-gluster01b:arbtest=<STORAGE_CONNECTION,
ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2016-01-20 08:08:38,446 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder] (default task-55)
[] Failed to find host on any provider by host name 't2-ovirt01.sbisec.int'
2016-01-20 08:08:38,955 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder] (default task-54)
[] Failed to find host on any provider by host name 't2-ovirt01.sbisec.int'
2016-01-20 08:08:39,286 ERROR
[org.ovirt.engine.core.bll.host.provider.foreman.SystemProviderFinder] (default task-45)
[] Failed to find host on any provider by host name 't2-ovirt01.sbisec.int'
2016-01-20 08:08:40,317 INFO
[org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit]
(DefaultQuartzScheduler_Worker-54) [503c9d86] There is no host with more than 50 running
guests, no balancing is needed
2016-01-20 08:09:40,357 INFO
[org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit]
(DefaultQuartzScheduler_Worker-34) [c5d3662] There is no host with more than 50 running
guests, no balancing is needed
2016-01-20 08:10:40,362 INFO
[org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit]
(DefaultQuartzScheduler_Worker-25) [12860ca7] There is no host with more than 50 running
guests, no balancing is needed
2016-01-20 08:11:40,400 INFO
[org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit]
(DefaultQuartzScheduler_Worker-68) [8e985fe] There is no host with more than 50 running
guests, no balancing is needed
VDSM logs:
Thread-823832::DEBUG::2016-01-20
08:08:36,511::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
'StoragePool.connectStorageServer' in bridge with {u'connectionParams':
[{u'id': u'00000000-0000-0000-0000-000000000000', u'connection':
u't2-gluster01b:arbtest', u'iqn': u'', u'user':
u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}],
u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
u'domainType': 7}
Thread-823832::DEBUG::2016-01-20
08:08:36,512::task::595::Storage.TaskManager.Task::(_updateState)
Task=`5613e0b2-79c2-460e-bc85-faf236ab39f6`::moving from state init -> state preparing
Thread-823832::INFO::2016-01-20 08:08:36,512::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
u'00000000-0000-0000-0000-000000000000', u'connection':
u't2-gluster01b:arbtest', u'iqn': u'', u'user':
u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)
Thread-823832::DEBUG::2016-01-20
08:08:36,534::fileUtils::143::Storage.fileUtils::(createdir) Creating directory:
/rhev/data-center/mnt/glusterSD/t2-gluster01b:arbtest mode: None
Thread-823832::DEBUG::2016-01-20 08:08:36,534::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/systemd-run --scope --slice=vdsm-glusterfs /usr/bin/mount -t
glusterfs -o
backup-volfile-servers=t2-gluster02b:t2-gluster03b:t2-gluster03b:t2-gluster04b:t2-gluster01b
t2-gluster01b:arbtest /rhev/data-center/mnt/glusterSD/t2-gluster01b:arbtest (cwd None)
Thread-823832::DEBUG::2016-01-20 08:08:36,678::__init__::298::IOProcessClient::(_run)
Starting IOProcess...
Thread-823832::DEBUG::2016-01-20 08:08:36,705::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*
Thread-823835::DEBUG::2016-01-20 08:08:36,710::__init__::298::IOProcessClient::(_run)
Starting IOProcess...
Thread-823832::DEBUG::2016-01-20 08:08:36,742::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: (u'01e65989-18a6-4d01-b2bb-d1ba04e9afdb',
u'0e62840d-da29-4d34-80c2-178b61c9ce11',
u'1e5b4666-043b-41bd-9896-aa349705126c')
Thread-823832::DEBUG::2016-01-20
08:08:36,742::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{0e62840d-da29-4d34-80c2-178b61c9ce11: storage.glusterSD.findDomain,
01e65989-18a6-4d01-b2bb-d1ba04e9afdb: storage.glusterSD.findDomain,
1e5b4666-043b-41bd-9896-aa349705126c: storage.glusterSD.findDomain}
Thread-823832::INFO::2016-01-20 08:08:36,742::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response: {'statuslist': [{'status':
0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-823832::DEBUG::2016-01-20
08:08:36,743::task::1191::Storage.TaskManager.Task::(prepare)
Task=`5613e0b2-79c2-460e-bc85-faf236ab39f6`::finished: {'statuslist':
[{'status': 0, 'id': u'00000000-0000-0000-0000-000000000000'}]}
Thread-823832::DEBUG::2016-01-20
08:08:36,743::task::595::Storage.TaskManager.Task::(_updateState)
Task=`5613e0b2-79c2-460e-bc85-faf236ab39f6`::moving from state preparing -> state
finished
Thread-823832::DEBUG::2016-01-20
08:08:36,743::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-823832::DEBUG::2016-01-20
08:08:36,743::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-823832::DEBUG::2016-01-20
08:08:36,743::task::993::Storage.TaskManager.Task::(_decref)
Task=`5613e0b2-79c2-460e-bc85-faf236ab39f6`::ref 0 aborting False
Thread-823832::DEBUG::2016-01-20
08:08:36,743::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return
'StoragePool.connectStorageServer' in bridge with [{'status': 0,
'id': u'00000000-0000-0000-0000-000000000000'}]
Thread-823839::DEBUG::2016-01-20
08:08:36,828::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
'StoragePool.connectStorageServer' in bridge with {u'connectionParams':
[{u'id': u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64', u'connection':
u't2-gluster01b:arbtest', u'iqn': u'', u'user':
u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}],
u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
u'domainType': 7}
Thread-823839::DEBUG::2016-01-20
08:08:36,829::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8aad9b06-200d-4003-adca-34a98019e264`::moving from state init -> state preparing
Thread-823839::INFO::2016-01-20 08:08:36,830::logUtils::48::dispatcher::(wrapper) Run and
protect: connectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64', u'connection':
u't2-gluster01b:arbtest', u'iqn': u'', u'user':
u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)
Thread-823839::DEBUG::2016-01-20 08:08:36,831::hsm::2417::Storage.HSM::(__prefetchDomains)
glusterDomPath: glusterSD/*
Thread-823839::DEBUG::2016-01-20 08:08:36,848::hsm::2429::Storage.HSM::(__prefetchDomains)
Found SD uuids: (u'01e65989-18a6-4d01-b2bb-d1ba04e9afdb',
u'0e62840d-da29-4d34-80c2-178b61c9ce11',
u'1e5b4666-043b-41bd-9896-aa349705126c')
Thread-823839::DEBUG::2016-01-20
08:08:36,849::hsm::2489::Storage.HSM::(connectStorageServer) knownSDs:
{0e62840d-da29-4d34-80c2-178b61c9ce11: storage.glusterSD.findDomain,
01e65989-18a6-4d01-b2bb-d1ba04e9afdb: storage.glusterSD.findDomain,
1e5b4666-043b-41bd-9896-aa349705126c: storage.glusterSD.findDomain}
Thread-823839::INFO::2016-01-20 08:08:36,849::logUtils::51::dispatcher::(wrapper) Run and
protect: connectStorageServer, Return response: {'statuslist': [{'status':
0, 'id': u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64'}]}
Thread-823839::DEBUG::2016-01-20
08:08:36,849::task::1191::Storage.TaskManager.Task::(prepare)
Task=`8aad9b06-200d-4003-adca-34a98019e264`::finished: {'statuslist':
[{'status': 0, 'id': u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64'}]}
Thread-823839::DEBUG::2016-01-20
08:08:36,849::task::595::Storage.TaskManager.Task::(_updateState)
Task=`8aad9b06-200d-4003-adca-34a98019e264`::moving from state preparing -> state
finished
Thread-823839::DEBUG::2016-01-20
08:08:36,849::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-823839::DEBUG::2016-01-20
08:08:36,849::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-823839::DEBUG::2016-01-20
08:08:36,849::task::993::Storage.TaskManager.Task::(_decref)
Task=`8aad9b06-200d-4003-adca-34a98019e264`::ref 0 aborting False
Thread-823839::DEBUG::2016-01-20
08:08:36,850::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return
'StoragePool.connectStorageServer' in bridge with [{'status': 0,
'id': u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64'}]
Thread-823846::DEBUG::2016-01-20
08:08:36,855::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
'StorageDomain.create' in bridge with {u'name': u'arbtest',
u'domainType': 7, u'domainClass': 1, u'typeArgs':
u't2-gluster01b:arbtest', u'version': u'3',
u'storagedomainID': u'3d31af0b-18ad-45c4-90f1-18e2f804f34b'}
Thread-823846::DEBUG::2016-01-20
08:08:36,856::task::595::Storage.TaskManager.Task::(_updateState)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::moving from state init -> state preparing
Thread-823846::INFO::2016-01-20 08:08:36,856::logUtils::48::dispatcher::(wrapper) Run and
protect: createStorageDomain(storageType=7,
sdUUID=u'3d31af0b-18ad-45c4-90f1-18e2f804f34b', domainName=u'arbtest',
typeSpecificArg=u't2-gluster01b:arbtest', domClass=1, domVersion=u'3',
options=None)
Thread-823846::DEBUG::2016-01-20
08:08:36,857::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling
method (storage.sdc.refreshStorage)
Thread-823846::DEBUG::2016-01-20
08:08:36,857::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-823846::DEBUG::2016-01-20
08:08:36,857::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling
method (storage.iscsi.rescan)
Thread-823846::DEBUG::2016-01-20
08:08:36,857::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-823846::DEBUG::2016-01-20 08:08:36,857::iscsi::431::Storage.ISCSI::(rescan)
Performing SCSI scan, this will take up to 30 seconds
Thread-823846::DEBUG::2016-01-20
08:08:36,857::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm
-m session -R (cwd None)
Thread-823846::DEBUG::2016-01-20
08:08:36,885::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-823846::DEBUG::2016-01-20
08:08:36,885::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling
method (storage.hba.rescan)
Thread-823846::DEBUG::2016-01-20
08:08:36,885::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-823846::DEBUG::2016-01-20 08:08:36,885::hba::56::Storage.HBA::(rescan) Starting
scan
Thread-823846::DEBUG::2016-01-20 08:08:36,968::hba::62::Storage.HBA::(rescan) Scan
finished
Thread-823846::DEBUG::2016-01-20
08:08:36,968::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-823846::DEBUG::2016-01-20
08:08:36,968::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n
/usr/sbin/multipath (cwd None)
Thread-823846::DEBUG::2016-01-20
08:08:37,030::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> =
''; <rc> = 0
Thread-823846::DEBUG::2016-01-20 08:08:37,030::utils::676::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)
Thread-823846::DEBUG::2016-01-20 08:08:37,044::utils::694::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0
Thread-823846::DEBUG::2016-01-20
08:08:37,045::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-823846::DEBUG::2016-01-20
08:08:37,045::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-823846::DEBUG::2016-01-20
08:08:37,045::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-823846::DEBUG::2016-01-20
08:08:37,045::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-823846::DEBUG::2016-01-20
08:08:37,045::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-823846::DEBUG::2016-01-20
08:08:37,045::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-823846::DEBUG::2016-01-20
08:08:37,046::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-823846::ERROR::2016-01-20
08:08:37,046::sdc::138::Storage.StorageDomainCache::(_findDomain) looking for unfetched
domain 3d31af0b-18ad-45c4-90f1-18e2f804f34b
Thread-823846::ERROR::2016-01-20
08:08:37,046::sdc::155::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for
domain 3d31af0b-18ad-45c4-90f1-18e2f804f34b
Thread-823846::DEBUG::2016-01-20
08:08:37,046::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload
operation' got the operation mutex
Thread-823846::DEBUG::2016-01-20 08:08:37,047::lvm::291::Storage.Misc.excCmd::(cmd)
/usr/bin/sudo -n /usr/sbin/lvm vgs --config ' devices { preferred_names =
["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0
disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [
'\''a|/dev/mapper/3644a84200be317001c56846212b97c61|'\'',
'\''r|.*|'\'' ] } global { locking_type=1
prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50
retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|'
--ignoreskippedcluster -o
uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name
3d31af0b-18ad-45c4-90f1-18e2f804f34b (cwd None)
Thread-823846::DEBUG::2016-01-20 08:08:37,413::lvm::291::Storage.Misc.excCmd::(cmd)
FAILED: <err> = ' Volume group "3d31af0b-18ad-45c4-90f1-18e2f804f34b"
not found\n Cannot process volume group 3d31af0b-18ad-45c4-90f1-18e2f804f34b\n';
<rc> = 5
Thread-823846::WARNING::2016-01-20 08:08:37,413::lvm::376::Storage.LVM::(_reloadvgs) lvm
vgs failed: 5 [] [' Volume group "3d31af0b-18ad-45c4-90f1-18e2f804f34b" not
found', ' Cannot process volume group 3d31af0b-18ad-45c4-90f1-18e2f804f34b']
Thread-823846::DEBUG::2016-01-20
08:08:37,413::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload
operation' released the operation mutex
Thread-823853::DEBUG::2016-01-20 08:08:37,432::__init__::298::IOProcessClient::(_run)
Starting IOProcess...
Thread-823854::DEBUG::2016-01-20 08:08:37,454::__init__::298::IOProcessClient::(_run)
Starting IOProcess...
Thread-823846::ERROR::2016-01-20
08:08:37,497::sdc::144::Storage.StorageDomainCache::(_findDomain) domain
3d31af0b-18ad-45c4-90f1-18e2f804f34b not found
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sdc.py", line 142, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 172, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist:
(u'3d31af0b-18ad-45c4-90f1-18e2f804f34b',)
Thread-823846::INFO::2016-01-20 08:08:37,498::nfsSD::69::Storage.StorageDomain::(create)
sdUUID=3d31af0b-18ad-45c4-90f1-18e2f804f34b domainName=arbtest
remotePath=t2-gluster01b:arbtest domClass=1
Thread-823846::DEBUG::2016-01-20 08:08:37,510::__init__::298::IOProcessClient::(_run)
Starting IOProcess...
Thread-823846::ERROR::2016-01-20
08:08:37,903::task::866::Storage.TaskManager.Task::(_setError)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 49, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 2697, in createStorageDomain
domVersion)
File "/usr/share/vdsm/storage/nfsSD.py", line 84, in create
remotePath, storageType, version)
File "/usr/share/vdsm/storage/fileSD.py", line 264, in _prepareMetadata
"create meta file '%s' failed: %s" % (metaFile, str(e)))
StorageDomainMetadataCreationError: Error creating a storage domain's metadata:
("create meta file 'outbox' failed: [Errno 5] Input/output error",)
Thread-823846::DEBUG::2016-01-20 08:08:37,904::task::885::Storage.TaskManager.Task::(_run)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::Task._run:
4abe19f6-0e90-41be-ab53-6bf08626e92f (7, u'3d31af0b-18ad-45c4-90f1-18e2f804f34b',
u'arbtest', u't2-gluster01b:arbtest', 1, u'3') {} failed -
stopping task
Thread-823846::DEBUG::2016-01-20
08:08:37,904::task::1246::Storage.TaskManager.Task::(stop)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::stopping in state preparing (force False)
Thread-823846::DEBUG::2016-01-20
08:08:37,904::task::993::Storage.TaskManager.Task::(_decref)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::ref 1 aborting True
Thread-823846::INFO::2016-01-20
08:08:37,905::task::1171::Storage.TaskManager.Task::(prepare)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::aborting: Task is aborted: "Error
creating a storage domain's metadata" - code 362
Thread-823846::DEBUG::2016-01-20
08:08:37,905::task::1176::Storage.TaskManager.Task::(prepare)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::Prepare: aborted: Error creating a storage
domain's metadata
Thread-823846::DEBUG::2016-01-20
08:08:37,905::task::993::Storage.TaskManager.Task::(_decref)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::ref 0 aborting True
Thread-823846::DEBUG::2016-01-20
08:08:37,905::task::928::Storage.TaskManager.Task::(_doAbort)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::Task._doAbort: force False
Thread-823846::DEBUG::2016-01-20
08:08:37,905::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-823846::DEBUG::2016-01-20
08:08:37,905::task::595::Storage.TaskManager.Task::(_updateState)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::moving from state preparing -> state
aborting
Thread-823846::DEBUG::2016-01-20
08:08:37,905::task::550::Storage.TaskManager.Task::(__state_aborting)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::_aborting: recover policy none
Thread-823846::DEBUG::2016-01-20
08:08:37,905::task::595::Storage.TaskManager.Task::(_updateState)
Task=`4abe19f6-0e90-41be-ab53-6bf08626e92f`::moving from state aborting -> state
failed
Thread-823846::DEBUG::2016-01-20
08:08:37,906::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-823846::DEBUG::2016-01-20
08:08:37,906::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-823846::ERROR::2016-01-20
08:08:37,906::dispatcher::76::Storage.Dispatcher::(wrapper) {'status':
{'message': 'Error creating a storage domain\'s metadata: ("create
meta file \'outbox\' failed: [Errno 5] Input/output error",)',
'code': 362}}
mailbox.SPMMonitor::DEBUG::2016-01-20
08:08:37,972::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) dd
if=/rhev/data-center/00000001-0001-0001-0001-00000000037e/mastersd/dom_md/inbox
iflag=direct,fullblock count=1 bs=1024000 (cwd None)
mailbox.SPMMonitor::DEBUG::2016-01-20
08:08:37,992::storage_mailbox::735::Storage.Misc.excCmd::(_checkForMail) SUCCESS:
<err> = '1+0 records in\n1+0 records out\n1024000 bytes (1.0 MB) copied,
0.0033988 s, 301 MB/s\n'; <rc> = 0
Thread-823863::DEBUG::2016-01-20
08:08:38,151::__init__::503::jsonrpc.JsonRpcServer::(_serveRequest) Calling
'StoragePool.disconnectStorageServer' in bridge with {u'connectionParams':
[{u'id': u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64', u'connection':
u't2-gluster01b:arbtest', u'iqn': u'', u'user':
u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}],
u'storagepoolID': u'00000000-0000-0000-0000-000000000000',
u'domainType': 7}
Thread-823863::DEBUG::2016-01-20
08:08:38,152::task::595::Storage.TaskManager.Task::(_updateState)
Task=`88d2d366-1f95-4247-87d7-516645984b49`::moving from state init -> state preparing
Thread-823863::INFO::2016-01-20 08:08:38,153::logUtils::48::dispatcher::(wrapper) Run and
protect: disconnectStorageServer(domType=7,
spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id':
u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64', u'connection':
u't2-gluster01b:arbtest', u'iqn': u'', u'user':
u'', u'tpgt': u'1', u'vfs_type': u'glusterfs',
u'password': '********', u'port': u''}], options=None)
Thread-823863::DEBUG::2016-01-20 08:08:38,153::mount::229::Storage.Misc.excCmd::(_runcmd)
/usr/bin/sudo -n /usr/bin/umount -f -l
/rhev/data-center/mnt/glusterSD/t2-gluster01b:arbtest (cwd None)
Thread-823863::DEBUG::2016-01-20
08:08:38,183::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling
method (storage.sdc.refreshStorage)
Thread-823863::DEBUG::2016-01-20
08:08:38,183::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-823863::DEBUG::2016-01-20
08:08:38,183::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling
method (storage.iscsi.rescan)
Thread-823863::DEBUG::2016-01-20
08:08:38,183::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-823863::DEBUG::2016-01-20 08:08:38,183::iscsi::431::Storage.ISCSI::(rescan)
Performing SCSI scan, this will take up to 30 seconds
Thread-823863::DEBUG::2016-01-20
08:08:38,184::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm
-m session -R (cwd None)
Thread-823863::DEBUG::2016-01-20
08:08:38,208::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-823863::DEBUG::2016-01-20
08:08:38,208::misc::750::Storage.SamplingMethod::(__call__) Trying to enter sampling
method (storage.hba.rescan)
Thread-823863::DEBUG::2016-01-20
08:08:38,208::misc::753::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-823863::DEBUG::2016-01-20 08:08:38,208::hba::56::Storage.HBA::(rescan) Starting
scan
Thread-823863::DEBUG::2016-01-20 08:08:38,292::hba::62::Storage.HBA::(rescan) Scan
finished
Thread-823863::DEBUG::2016-01-20
08:08:38,292::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-823863::DEBUG::2016-01-20
08:08:38,292::multipath::77::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n
/usr/sbin/multipath (cwd None)
Thread-823863::DEBUG::2016-01-20
08:08:38,347::multipath::77::Storage.Misc.excCmd::(rescan) SUCCESS: <err> =
''; <rc> = 0
Thread-823863::DEBUG::2016-01-20 08:08:38,347::utils::676::root::(execCmd) /sbin/udevadm
settle --timeout=5 (cwd None)
Thread-823863::DEBUG::2016-01-20 08:08:38,361::utils::694::root::(execCmd) SUCCESS:
<err> = ''; <rc> = 0
Thread-823863::DEBUG::2016-01-20
08:08:38,362::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-823863::DEBUG::2016-01-20
08:08:38,362::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-823863::DEBUG::2016-01-20
08:08:38,362::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-823863::DEBUG::2016-01-20
08:08:38,362::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-823863::DEBUG::2016-01-20
08:08:38,363::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' got the operation mutex
Thread-823863::DEBUG::2016-01-20
08:08:38,363::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm
invalidate operation' released the operation mutex
Thread-823863::DEBUG::2016-01-20
08:08:38,363::misc::760::Storage.SamplingMethod::(__call__) Returning last result
Thread-823863::INFO::2016-01-20 08:08:38,363::logUtils::51::dispatcher::(wrapper) Run and
protect: disconnectStorageServer, Return response: {'statuslist':
[{'status': 0, 'id': u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64'}]}
Thread-823863::DEBUG::2016-01-20
08:08:38,363::task::1191::Storage.TaskManager.Task::(prepare)
Task=`88d2d366-1f95-4247-87d7-516645984b49`::finished: {'statuslist':
[{'status': 0, 'id': u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64'}]}
Thread-823863::DEBUG::2016-01-20
08:08:38,363::task::595::Storage.TaskManager.Task::(_updateState)
Task=`88d2d366-1f95-4247-87d7-516645984b49`::moving from state preparing -> state
finished
Thread-823863::DEBUG::2016-01-20
08:08:38,363::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-823863::DEBUG::2016-01-20
08:08:38,363::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-823863::DEBUG::2016-01-20
08:08:38,364::task::993::Storage.TaskManager.Task::(_decref)
Task=`88d2d366-1f95-4247-87d7-516645984b49`::ref 0 aborting False
Thread-823863::DEBUG::2016-01-20
08:08:38,364::__init__::533::jsonrpc.JsonRpcServer::(_serveRequest) Return
'StoragePool.disconnectStorageServer' in bridge with [{'status': 0,
'id': u'2d502b8e-dcab-4c2b-bf5b-600e3465fe64'}]