[ovirt-users] Error while executing action New SAN Storage Domain: Cannot zero out volume
Devin A. Bougie
devin.bougie at cornell.edu
Fri Oct 23 20:12:23 UTC 2015
Every time I try to create a Data / iSCSI Storage Domain, I receive an "Error while executing action New SAN Storage Domain: Cannot zero out volume" error.
iscsid does log in to the target, and the logical volumes appear to have been created. However, I still cannot use the LUN to create or import a Data / iSCSI storage domain.
[root at lnx84 ~]# iscsiadm -m node
#.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1
[root at lnx84 ~]# iscsiadm -m session
tcp: [1] #.#.#.#:3260,1 iqn.2015-10.N.N.N.lnx88:lnx88.target1 (non-flash)
[root at lnx84 ~]# pvscan
PV /dev/mapper/1IET_00010001 VG f73c8720-77c3-42a6-8a29-9677db54bac6 lvm2 [547.62 GiB / 543.75 GiB free]
...
[root at lnx84 ~]# lvscan
inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata' [512.00 MiB] inherit
inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/outbox' [128.00 MiB] inherit
inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/leases' [2.00 GiB] inherit
inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/ids' [128.00 MiB] inherit
inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/inbox' [128.00 MiB] inherit
inactive '/dev/f73c8720-77c3-42a6-8a29-9677db54bac6/master' [1.00 GiB] inherit
...
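In case it helps, this is how I can activate and inspect one of those leftover LVs by hand (the VG name is taken from the lvscan output above; this is purely a manual sanity check on my side, not anything oVirt runs itself):

lvchange -ay f73c8720-77c3-42a6-8a29-9677db54bac6/metadata
ls -l /dev/f73c8720-77c3-42a6-8a29-9677db54bac6/metadata
lvchange -an f73c8720-77c3-42a6-8a29-9677db54bac6/metadata

The ls is just to see who owns the device node once the LV is active; the second lvchange deactivates it again afterwards.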
Any help would be greatly appreciated.
Many thanks,
Devin
Here are the relevant lines from engine.log:
------
2015-10-23 16:04:56,925 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (ajp--127.0.0.1-8702-8) START, GetDeviceListVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageType=ISCSI), log id: 44a64578
2015-10-23 16:04:57,681 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (ajp--127.0.0.1-8702-8) FINISH, GetDeviceListVDSCommand, return: [LUNs [id=1IET_00010001, physicalVolumeId=wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn, volumeGroupId=8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu, serial=SIET_VIRTUAL-DISK, lunMapping=1, vendorId=IET, productId=VIRTUAL-DISK, _lunConnections=[{ id: null, connection: #.#.#.#, iqn: iqn.2015-10.N.N.N.lnx88:lnx88.target1, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };], deviceSize=547, vendorName=IET, pathsDictionary={sdi=true}, lunType=ISCSI, status=Used, diskId=null, diskAlias=null, storageDomainId=null, storageDomainName=null]], log id: 44a64578
2015-10-23 16:05:06,474 INFO [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Running command: AddSANStorageDomainCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_STORAGE_DOMAIN with role type ADMIN
2015-10-23 16:05:06,488 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateVGVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageDomainId=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19, deviceList=[1IET_00010001], force=true), log id: 12acc23b
2015-10-23 16:05:07,379 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVGVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateVGVDSCommand, return: dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P, log id: 12acc23b
2015-10-23 16:05:07,384 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] START, CreateStorageDomainVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageDomain=StorageDomainStatic[lnx88, cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19], args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P), log id: cc93ec6
2015-10-23 16:05:10,356 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Failed in CreateStorageDomainVDS method
2015-10-23 16:05:10,360 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand return value
StatusOnlyReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=374, mMessage=Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)]]
2015-10-23 16:05:10,367 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] HostName = lnx84
2015-10-23 16:05:10,370 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command CreateStorageDomainVDSCommand(HostName = lnx84, HostId = a650e161-75f6-4916-bc18-96044bf3fc26, storageDomain=StorageDomainStatic[lnx88, cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19], args=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',), code = 374
2015-10-23 16:05:10,381 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateStorageDomainVDSCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] FINISH, CreateStorageDomainVDSCommand, log id: cc93ec6
2015-10-23 16:05:10,385 ERROR [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand throw Vdc Bll exception. With error message VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateStorageDomainVDS, error = Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',), code = 374 (Failed with error VolumesZeroingError and code 374)
2015-10-23 16:05:10,394 INFO [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command [id=624f979b-fa98-4b1e-9244-c2c2c675d91b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainDynamic; snapshot: cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19.
2015-10-23 16:05:10,395 INFO [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Command [id=624f979b-fa98-4b1e-9244-c2c2c675d91b]: Compensating NEW_ENTITY_ID of org.ovirt.engine.core.common.businessentities.StorageDomainStatic; snapshot: cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19.
2015-10-23 16:05:10,422 ERROR [org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand] (ajp--127.0.0.1-8702-8) [53dd8c98] Transaction rolled-back for command: org.ovirt.engine.core.bll.storage.AddSANStorageDomainCommand.
2015-10-23 16:05:10,430 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ajp--127.0.0.1-8702-8) [53dd8c98] Correlation ID: 53dd8c98, Job ID: 7c09ed86-b0b3-448e-8ae1-50b4cce57508, Call Stack: null, Custom Event ID: -1, Message: Failed to add Storage Domain lnx88. (User: admin at internal)
------
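For reference, the engine-side error code 374 is VDSM's VolumesZeroingError (it shows up again in the VDSM traceback below). To line the two logs up, I grepped vdsm.log for the UUID of the new storage domain; on my hosts that log lives at /var/log/vdsm/vdsm.log:

grep -n 'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19' /var/log/vdsm/vdsm.log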
And here's some of what I see in vdsm.log. Note that I'm trying to add the domain on the node that the engine is running on.
------
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:38,546::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:38,548::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1838::DEBUG::2015-10-23 16:04:38,551::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:41,558::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:41,560::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1839::DEBUG::2015-10-23 16:04:41,563::task::595::Storage.TaskManager.Task::(_updateState) Task=`e649893f-f5b1-423a-a529-7def05ced600`::moving from state init -> state preparing
Thread-1839::INFO::2015-10-23 16:04:41,563::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1839::INFO::2015-10-23 16:04:41,563::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-1839::DEBUG::2015-10-23 16:04:41,563::task::1191::Storage.TaskManager.Task::(prepare) Task=`e649893f-f5b1-423a-a529-7def05ced600`::finished: {}
Thread-1839::DEBUG::2015-10-23 16:04:41,564::task::595::Storage.TaskManager.Task::(_updateState) Task=`e649893f-f5b1-423a-a529-7def05ced600`::moving from state preparing -> state finished
Thread-1839::DEBUG::2015-10-23 16:04:41,564::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1839::DEBUG::2015-10-23 16:04:41,564::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1839::DEBUG::2015-10-23 16:04:41,564::task::993::Storage.TaskManager.Task::(_decref) Task=`e649893f-f5b1-423a-a529-7def05ced600`::ref 0 aborting False
Thread-1839::DEBUG::2015-10-23 16:04:41,566::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:41,574::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:41,575::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1840::DEBUG::2015-10-23 16:04:41,578::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:44,616::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:44,618::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1841::DEBUG::2015-10-23 16:04:44,621::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:47,628::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:47,630::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1842::DEBUG::2015-10-23 16:04:47,633::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:50,640::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:50,641::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1843::DEBUG::2015-10-23 16:04:50,643::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:53,650::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:53,652::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1844::DEBUG::2015-10-23 16:04:53,655::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:56,663::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:56,665::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1845::DEBUG::2015-10-23 16:04:56,667::task::595::Storage.TaskManager.Task::(_updateState) Task=`1b4d0d0d-7ea5-4f29-99b1-e5fa5448c632`::moving from state init -> state preparing
Thread-1845::INFO::2015-10-23 16:04:56,668::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1845::INFO::2015-10-23 16:04:56,668::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-1845::DEBUG::2015-10-23 16:04:56,668::task::1191::Storage.TaskManager.Task::(prepare) Task=`1b4d0d0d-7ea5-4f29-99b1-e5fa5448c632`::finished: {}
Thread-1845::DEBUG::2015-10-23 16:04:56,668::task::595::Storage.TaskManager.Task::(_updateState) Task=`1b4d0d0d-7ea5-4f29-99b1-e5fa5448c632`::moving from state preparing -> state finished
Thread-1845::DEBUG::2015-10-23 16:04:56,668::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1845::DEBUG::2015-10-23 16:04:56,668::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1845::DEBUG::2015-10-23 16:04:56,669::task::993::Storage.TaskManager.Task::(_decref) Task=`1b4d0d0d-7ea5-4f29-99b1-e5fa5448c632`::ref 0 aborting False
Thread-1845::DEBUG::2015-10-23 16:04:56,671::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:56,678::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:56,679::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1846::DEBUG::2015-10-23 16:04:56,682::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:56,931::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:56,933::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1847::DEBUG::2015-10-23 16:04:56,933::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'Host.getDeviceList' in bridge with {u'storageType': 3}
Thread-1847::DEBUG::2015-10-23 16:04:56,937::task::595::Storage.TaskManager.Task::(_updateState) Task=`8706ee69-83be-4b2d-90dd-d1d4561fe8c9`::moving from state init -> state preparing
Thread-1847::INFO::2015-10-23 16:04:56,937::logUtils::44::dispatcher::(wrapper) Run and protect: getDeviceList(storageType=3, options={})
Thread-1847::DEBUG::2015-10-23 16:04:56,937::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
Thread-1847::DEBUG::2015-10-23 16:04:56,937::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1847::DEBUG::2015-10-23 16:04:56,938::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
Thread-1847::DEBUG::2015-10-23 16:04:56,938::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1847::DEBUG::2015-10-23 16:04:56,938::iscsi::424::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
Thread-1847::DEBUG::2015-10-23 16:04:56,938::iscsiadm::97::Storage.Misc.excCmd::(_runCmd) /usr/bin/sudo -n /sbin/iscsiadm -m session -R (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:56,961::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1847::DEBUG::2015-10-23 16:04:56,961::misc::741::Storage.SamplingMethod::(__call__) Trying to enter sampling method (storage.hba.rescan)
Thread-1847::DEBUG::2015-10-23 16:04:56,962::misc::743::Storage.SamplingMethod::(__call__) Got in to sampling method
Thread-1847::DEBUG::2015-10-23 16:04:56,962::hba::53::Storage.HBA::(rescan) Starting scan
Thread-1847::DEBUG::2015-10-23 16:04:56,962::utils::739::Storage.HBA::(execCmd) /usr/bin/sudo -n /usr/libexec/vdsm/fc-scan (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,002::hba::66::Storage.HBA::(rescan) Scan finished
Thread-1847::DEBUG::2015-10-23 16:04:57,002::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1847::DEBUG::2015-10-23 16:04:57,003::multipath::131::Storage.Misc.excCmd::(rescan) /usr/bin/sudo -n /sbin/multipath (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,186::multipath::131::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
Thread-1847::DEBUG::2015-10-23 16:04:57,186::utils::739::root::(execCmd) /sbin/udevadm settle --timeout=5 (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,196::utils::759::root::(execCmd) SUCCESS: <err> = ''; <rc> = 0
Thread-1847::DEBUG::2015-10-23 16:04:57,197::lvm::498::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,197::lvm::500::Storage.OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,197::lvm::509::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,198::lvm::511::Storage.OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,198::lvm::529::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,198::lvm::531::Storage.OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,198::misc::751::Storage.SamplingMethod::(__call__) Returning last result
Thread-1847::DEBUG::2015-10-23 16:04:57,199::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,202::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,443::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1847::DEBUG::2015-10-23 16:04:57,444::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
Thread-1847::DEBUG::2015-10-23 16:04:57,510::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --test --metadatasize 128m --metadatacopies 2 --metadataignore y /dev/mapper/1IET_00010001 (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,610::lvm::291::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.\n Can\'t initialize physical volume "/dev/mapper/1IET_00010001" of volume group "24cd6134-8229-42ca-9fe1-489143d1d8d3" without -ff\n'; <rc> = 5
Thread-1847::DEBUG::2015-10-23 16:04:57,613::lvm::301::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --test --metadatasize 128m --metadatacopies 2 --metadataignore y /dev/mapper/1IET_00010001 (cwd None)
Thread-1847::DEBUG::2015-10-23 16:04:57,676::lvm::301::Storage.Misc.excCmd::(cmd) FAILED: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.\n Can\'t initialize physical volume "/dev/mapper/1IET_00010001" of volume group "24cd6134-8229-42ca-9fe1-489143d1d8d3" without -ff\n'; <rc> = 5
Thread-1847::DEBUG::2015-10-23 16:04:57,676::lvm::861::Storage.LVM::(testPVCreate) rc: 5, out: [], err: [' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!', ' TEST MODE: Metadata will NOT be updated and volumes will not be (de)activated.', ' Can\'t initialize physical volume "/dev/mapper/1IET_00010001" of volume group "24cd6134-8229-42ca-9fe1-489143d1d8d3" without -ff'], unusedDevs: set([]), usedDevs: set(['/dev/mapper/1IET_00010001'])
Thread-1847::INFO::2015-10-23 16:04:57,677::logUtils::47::dispatcher::(wrapper) Run and protect: getDeviceList, Return response: {'devList': [{'status': 'used', 'vendorID': 'IET', 'capacity': '588332924928', 'fwrev': '0001', 'vgUUID': '8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu', 'pathlist': [{'initiatorname': 'default', 'connection': '#.#.#.#', 'iqn': 'iqn.2015-10.N.N.N.lnx88:lnx88.target1', 'portal': '1', 'password': '******', 'port': '3260'}], 'logicalblocksize': '512', 'pathstatus': [{'physdev': 'sdi', 'type': 'iSCSI', 'state': 'active', 'lun': '1'}], 'devtype': 'iSCSI', 'physicalblocksize': '512', 'pvUUID': 'wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn', 'serial': 'SIET_VIRTUAL-DISK', 'GUID': '1IET_00010001', 'productID': 'VIRTUAL-DISK'}]}
Thread-1847::DEBUG::2015-10-23 16:04:57,677::task::1191::Storage.TaskManager.Task::(prepare) Task=`8706ee69-83be-4b2d-90dd-d1d4561fe8c9`::finished: {'devList': [{'status': 'used', 'fwrev': '0001', 'vgUUID': '8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu', 'pathlist': [{'connection': '#.#.#.#', 'iqn': 'iqn.2015-10.N.N.N.lnx88:lnx88.target1', 'portal': '1', 'port': '3260', 'initiatorname': 'default'}], 'logicalblocksize': '512', 'devtype': 'iSCSI', 'physicalblocksize': '512', 'serial': 'SIET_VIRTUAL-DISK', 'GUID': '1IET_00010001', 'productID': 'VIRTUAL-DISK', 'vendorID': 'IET', 'capacity': '588332924928', 'pathstatus': [{'physdev': 'sdi', 'type': 'iSCSI', 'state': 'active', 'lun': '1'}], 'pvUUID': 'wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn'}]}
Thread-1847::DEBUG::2015-10-23 16:04:57,678::task::595::Storage.TaskManager.Task::(_updateState) Task=`8706ee69-83be-4b2d-90dd-d1d4561fe8c9`::moving from state preparing -> state finished
Thread-1847::DEBUG::2015-10-23 16:04:57,678::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1847::DEBUG::2015-10-23 16:04:57,678::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1847::DEBUG::2015-10-23 16:04:57,678::task::993::Storage.TaskManager.Task::(_decref) Task=`8706ee69-83be-4b2d-90dd-d1d4561fe8c9`::ref 0 aborting False
Thread-1847::DEBUG::2015-10-23 16:04:57,679::__init__::514::jsonrpc.JsonRpcServer::(_serveRequest) Return 'Host.getDeviceList' in bridge with [{'status': 'used', 'vendorID': 'IET', 'capacity': '588332924928', 'fwrev': '0001', 'vgUUID': '8gZEwa-3x5m-TiqA-uEPX-gC04-wkzx-PlaQDu', 'pathlist': [{'connection': '#.#.#.#', 'iqn': 'iqn.2015-10.N.N.N.lnx88:lnx88.target1', 'portal': '1', 'port': '3260', 'initiatorname': 'default'}], 'logicalblocksize': '512', 'pathstatus': [{'type': 'iSCSI', 'physdev': 'sdi', 'deviceType': 'iSCSI', 'state': 'active', 'lun': '1'}], 'devtype': 'iSCSI', 'physicalblocksize': '512', 'pvUUID': 'wpmBIM-tgc1-yKtH-XSwc-40wZ-Kn49-btwBFn', 'serial': 'SIET_VIRTUAL-DISK', 'GUID': '1IET_00010001', 'productID': 'VIRTUAL-DISK'}]
Thread-1847::DEBUG::2015-10-23 16:04:57,679::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:04:59,721::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:04:59,723::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1848::DEBUG::2015-10-23 16:04:59,726::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:02,734::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:02,736::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1849::DEBUG::2015-10-23 16:05:02,739::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:05,746::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:05,747::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1850::DEBUG::2015-10-23 16:05:05,750::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:06,515::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:06,517::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1851::DEBUG::2015-10-23 16:05:06,517::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'LVMVolumeGroup.create' in bridge with {u'devlist': [u'1IET_00010001'], u'force': True, u'name': u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19'}
Thread-1851::DEBUG::2015-10-23 16:05:06,519::task::595::Storage.TaskManager.Task::(_updateState) Task=`2148889f-5651-400b-98da-56e58614342c`::moving from state init -> state preparing
Thread-1851::INFO::2015-10-23 16:05:06,519::logUtils::44::dispatcher::(wrapper) Run and protect: createVG(vgname=u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19', devlist=[u'1IET_00010001'], force=True, options=None)
Thread-1851::DEBUG::2015-10-23 16:05:06,623::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' -y -ff --metadatasize 128m --metadatacopies 2 --metadataignore y /dev/mapper/1IET_00010001 (cwd None)
Thread-1851::DEBUG::2015-10-23 16:05:06,866::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n WARNING: Forcing physical volume creation on /dev/mapper/1IET_00010001 of volume group "24cd6134-8229-42ca-9fe1-489143d1d8d3"\n'; <rc> = 0
Thread-1851::DEBUG::2015-10-23 16:05:06,868::lvm::492::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:06,871::lvm::495::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:06,876::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --metadataignore n /dev/mapper/1IET_00010001 (cwd None)
Thread-1851::DEBUG::2015-10-23 16:05:07,021::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1851::DEBUG::2015-10-23 16:05:07,021::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --physicalextentsize 128m --addtag RHAT_storage_domain_UNREADY cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 /dev/mapper/1IET_00010001 (cwd None)
Thread-1851::DEBUG::2015-10-23 16:05:07,294::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1851::DEBUG::2015-10-23 16:05:07,295::lvm::492::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,295::lvm::495::Storage.OperationMutex::(_invalidatepvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,296::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,296::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,296::lvm::932::Storage.LVM::(createVG) Cache after createvg {u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19': Stub(name=u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19', stale=True)}
Thread-1851::DEBUG::2015-10-23 16:05:07,296::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-1851::DEBUG::2015-10-23 16:05:07,297::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1851::DEBUG::2015-10-23 16:05:07,374::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1851::DEBUG::2015-10-23 16:05:07,375::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-1851::INFO::2015-10-23 16:05:07,375::logUtils::47::dispatcher::(wrapper) Run and protect: createVG, Return response: {'uuid': 'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P'}
Thread-1851::DEBUG::2015-10-23 16:05:07,376::task::1191::Storage.TaskManager.Task::(prepare) Task=`2148889f-5651-400b-98da-56e58614342c`::finished: {'uuid': 'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P'}
Thread-1851::DEBUG::2015-10-23 16:05:07,376::task::595::Storage.TaskManager.Task::(_updateState) Task=`2148889f-5651-400b-98da-56e58614342c`::moving from state preparing -> state finished
Thread-1851::DEBUG::2015-10-23 16:05:07,376::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1851::DEBUG::2015-10-23 16:05:07,376::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1851::DEBUG::2015-10-23 16:05:07,376::task::993::Storage.TaskManager.Task::(_decref) Task=`2148889f-5651-400b-98da-56e58614342c`::ref 0 aborting False
Thread-1851::DEBUG::2015-10-23 16:05:07,377::__init__::514::jsonrpc.JsonRpcServer::(_serveRequest) Return 'LVMVolumeGroup.create' in bridge with dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P
Thread-1851::DEBUG::2015-10-23 16:05:07,377::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:07,392::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
Thread-1852::DEBUG::2015-10-23 16:05:07,393::__init__::481::jsonrpc.JsonRpcServer::(_serveRequest) Calling 'StorageDomain.create' in bridge with {u'name': u'lnx88', u'domainType': 3, u'domainClass': 1, u'typeArgs': u'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P', u'version': u'3', u'storagedomainID': u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19'}
JsonRpcServer::DEBUG::2015-10-23 16:05:07,393::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1852::DEBUG::2015-10-23 16:05:07,397::task::595::Storage.TaskManager.Task::(_updateState) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::moving from state init -> state preparing
Thread-1852::INFO::2015-10-23 16:05:07,397::logUtils::44::dispatcher::(wrapper) Run and protect: createStorageDomain(storageType=3, sdUUID=u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19', domainName=u'lnx88', typeSpecificArg=u'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P', domClass=1, domVersion=u'3', options=None)
Thread-1852::ERROR::2015-10-23 16:05:07,397::sdc::137::Storage.StorageDomainCache::(_findDomain) looking for unfetched domain cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19
Thread-1852::ERROR::2015-10-23 16:05:07,398::sdc::154::Storage.StorageDomainCache::(_findUnfetchedDomain) looking for domain cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19
Thread-1852::ERROR::2015-10-23 16:05:07,400::sdc::143::Storage.StorageDomainCache::(_findDomain) domain cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 not found
Traceback (most recent call last):
File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
dom = findMethod(sdUUID)
File "/usr/share/vdsm/storage/sdc.py", line 171, in _findUnfetchedDomain
raise se.StorageDomainDoesNotExist(sdUUID)
StorageDomainDoesNotExist: Storage domain does not exist: (u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19',)
Thread-1852::INFO::2015-10-23 16:05:07,401::blockSD::484::Storage.StorageDomain::(create) sdUUID=cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 domainName=lnx88 domClass=1 vgUUID=dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P storageType=3 version=3
Thread-1852::DEBUG::2015-10-23 16:05:07,401::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,401::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,477::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,477::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,478::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,478::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,534::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,534::lvm::463::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-1852::DEBUG::2015-10-23 16:05:07,535::lvm::463::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::INFO::2015-10-23 16:05:07,535::blockSD::468::Storage.StorageDomain::(metaSize) size 512 MB (metaratio 262144)
Thread-1852::DEBUG::2015-10-23 16:05:07,535::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --contiguous n --size 512m --name metadata cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,652::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,653::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,653::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,653::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,654::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,654::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,674::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,675::lvm::428::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,676::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,vg_name,attr,size,seg_start_pe,devices,tags cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,747::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,748::lvm::463::Storage.LVM::(_reloadlvs) lvs reloaded
Thread-1852::DEBUG::2015-10-23 16:05:07,748::lvm::463::Storage.OperationMutex::(_reloadlvs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,749::lvm::371::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,749::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm vgs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free,lv_count,pv_count,pv_name cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,831::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,832::lvm::416::Storage.OperationMutex::(_reloadvgs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::INFO::2015-10-23 16:05:07,832::blockSD::653::Storage.StorageDomain::(getMetaDataMapping) Create: SORT MAPPING: ['/dev/mapper/1IET_00010001']
Thread-1852::DEBUG::2015-10-23 16:05:07,832::lvm::320::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,833::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm pvs --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --noheadings --units b --nosuffix --separator '|' --ignoreskippedcluster -o uuid,name,size,vg_name,vg_uuid,pe_start,pe_count,pe_alloc_count,mda_count,dev_size /dev/mapper/1IET_00010001 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:07,899::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:07,899::lvm::348::Storage.OperationMutex::(_reloadpvs) Operation 'lvm reload operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:07,900::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --contiguous n --size 16m --name outbox cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,021::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,021::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,022::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,022::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,022::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,023::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/outbox (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,042::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,042::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --contiguous n --size 2048m --name leases cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,205::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,206::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,206::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,207::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,207::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,207::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/leases (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,227::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,228::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --contiguous n --size 8m --name ids cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,353::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,354::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,354::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,354::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,355::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,355::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/ids (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,374::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,375::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --contiguous n --size 16m --name inbox cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,547::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n WARNING: This metadata update is NOT backed up\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,548::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,548::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,549::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,549::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,549::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/inbox (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,570::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,571::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvcreate --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --contiguous n --size 1024m --name master cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19 (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:08,744::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n WARNING: This metadata update is NOT backed up\nWARNING: ext3 signature detected on /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/master at offset 1080. Wipe it? [y/n]: n\n Aborted wiping of ext3.\n 1 existing signature left on the device.\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,745::lvm::504::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,745::lvm::506::Storage.OperationMutex::(_invalidatevgs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,745::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,746::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:08,746::lvm::1081::Storage.Misc.excCmd::(createLV) /usr/bin/sudo -n /usr/bin/chown vdsm:qemu /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/master (cwd None)
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:08,759::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:08,761::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1854::DEBUG::2015-10-23 16:05:08,764::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
Thread-1852::DEBUG::2015-10-23 16:05:08,768::lvm::1081::Storage.Misc.excCmd::(createLV) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:08,768::blockSD::1342::Storage.Misc.excCmd::(_createVMSfs) /usr/bin/sudo -n /usr/sbin/mkfs -q -j -E nodiscard /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/master (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:10,184::blockSD::1342::Storage.Misc.excCmd::(_createVMSfs) SUCCESS: <err> = ''; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:10,185::lvm::291::Storage.Misc.excCmd::(cmd) /usr/bin/sudo -n /sbin/lvm lvchange --config ' devices { preferred_names = ["^/dev/mapper/"] ignore_suspended_devices=1 write_cache_state=0 disable_after_error_count=3 obtain_device_list_from_udev=0 filter = [ '\''a|/dev/mapper/1IET_00010001|/dev/mapper/3500000e113735610|/dev/mapper/3500000e113740e60|/dev/mapper/3500000e113768d70|/dev/mapper/3500000e11376a440|/dev/mapper/3500000e11376a660|/dev/mapper/3500000e11376aa80|'\'', '\''r|.*|'\'' ] } global { locking_type=1 prioritise_write_locks=1 wait_for_locks=1 use_lvmetad=0 } backup { retain_min = 50 retain_days = 0 } ' --autobackup n --available n cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/master (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:10,339::lvm::291::Storage.Misc.excCmd::(cmd) SUCCESS: <err> = ' WARNING: lvmetad is running but disabled. Restart lvmetad before enabling it!\n'; <rc> = 0
Thread-1852::DEBUG::2015-10-23 16:05:10,340::lvm::514::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' got the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:10,340::lvm::526::Storage.OperationMutex::(_invalidatelvs) Operation 'lvm invalidate operation' released the operation mutex
Thread-1852::DEBUG::2015-10-23 16:05:10,341::blockSD::535::Storage.Misc.excCmd::(create) /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /usr/bin/dd if=/dev/zero of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 seek=0 skip=0 conv=notrunc count=40 oflag=direct (cwd None)
Thread-1852::DEBUG::2015-10-23 16:05:10,351::blockSD::535::Storage.Misc.excCmd::(create) FAILED: <err> = "/usr/bin/dd: failed to open '/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata': Permission denied\n"; <rc> = 1
Thread-1852::ERROR::2015-10-23 16:05:10,351::task::866::Storage.TaskManager.Task::(_setError) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 2687, in createStorageDomain
domVersion)
File "/usr/share/vdsm/storage/blockSD.py", line 543, in create
raise se.VolumesZeroingError(path)
VolumesZeroingError: Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)
Thread-1852::DEBUG::2015-10-23 16:05:10,352::task::885::Storage.TaskManager.Task::(_run) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::Task._run: f0b046fd-009c-4437-95c8-b5648808bc72 (3, u'cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19', u'lnx88', u'dDaCCO-PLDu-S2nz-yOjM-qpOW-PGaa-ecpJ8P', 1, u'3') {} failed - stopping task
Thread-1852::DEBUG::2015-10-23 16:05:10,352::task::1217::Storage.TaskManager.Task::(stop) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::stopping in state preparing (force False)
Thread-1852::DEBUG::2015-10-23 16:05:10,352::task::993::Storage.TaskManager.Task::(_decref) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::ref 1 aborting True
Thread-1852::INFO::2015-10-23 16:05:10,352::task::1171::Storage.TaskManager.Task::(prepare) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::aborting: Task is aborted: 'Cannot zero out volume' - code 374
Thread-1852::DEBUG::2015-10-23 16:05:10,353::task::1176::Storage.TaskManager.Task::(prepare) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::Prepare: aborted: Cannot zero out volume
Thread-1852::DEBUG::2015-10-23 16:05:10,353::task::993::Storage.TaskManager.Task::(_decref) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::ref 0 aborting True
Thread-1852::DEBUG::2015-10-23 16:05:10,353::task::928::Storage.TaskManager.Task::(_doAbort) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::Task._doAbort: force False
Thread-1852::DEBUG::2015-10-23 16:05:10,353::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1852::DEBUG::2015-10-23 16:05:10,353::task::595::Storage.TaskManager.Task::(_updateState) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::moving from state preparing -> state aborting
Thread-1852::DEBUG::2015-10-23 16:05:10,354::task::550::Storage.TaskManager.Task::(__state_aborting) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::_aborting: recover policy none
Thread-1852::DEBUG::2015-10-23 16:05:10,354::task::595::Storage.TaskManager.Task::(_updateState) Task=`f0b046fd-009c-4437-95c8-b5648808bc72`::moving from state aborting -> state failed
Thread-1852::DEBUG::2015-10-23 16:05:10,354::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1852::DEBUG::2015-10-23 16:05:10,354::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1852::ERROR::2015-10-23 16:05:10,354::dispatcher::76::Storage.Dispatcher::(wrapper) {'status': {'message': "Cannot zero out volume: ('/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata',)", 'code': 374}}
Thread-1852::DEBUG::2015-10-23 16:05:10,355::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:11,771::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:11,773::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1855::DEBUG::2015-10-23 16:05:11,776::task::595::Storage.TaskManager.Task::(_updateState) Task=`23187509-3a17-4bf0-bc7a-dcaa97ed4cdf`::moving from state init -> state preparing
Thread-1855::INFO::2015-10-23 16:05:11,776::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-1855::INFO::2015-10-23 16:05:11,776::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-1855::DEBUG::2015-10-23 16:05:11,776::task::1191::Storage.TaskManager.Task::(prepare) Task=`23187509-3a17-4bf0-bc7a-dcaa97ed4cdf`::finished: {}
Thread-1855::DEBUG::2015-10-23 16:05:11,777::task::595::Storage.TaskManager.Task::(_updateState) Task=`23187509-3a17-4bf0-bc7a-dcaa97ed4cdf`::moving from state preparing -> state finished
Thread-1855::DEBUG::2015-10-23 16:05:11,777::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-1855::DEBUG::2015-10-23 16:05:11,777::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-1855::DEBUG::2015-10-23 16:05:11,777::task::993::Storage.TaskManager.Task::(_decref) Task=`23187509-3a17-4bf0-bc7a-dcaa97ed4cdf`::ref 0 aborting False
Thread-1855::DEBUG::2015-10-23 16:05:11,779::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:11,786::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:11,789::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1856::DEBUG::2015-10-23 16:05:11,791::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2015-10-23 16:05:14,845::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2015-10-23 16:05:14,847::__init__::530::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-1857::DEBUG::2015-10-23 16:05:14,850::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
------
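The step that actually fails is the dd that zeroes the new metadata LV: it returns Permission denied even though VDSM had chowned the device to vdsm:qemu only a couple of seconds earlier. To see whether this is a plain permissions or SELinux issue, I plan to repeat that step by hand. The device path and dd arguments below are copied from the log above; running it via sudo -u vdsm is only my guess at how to mimic what VDSM does, so treat the whole thing as a sketch:

# reactivate the LV in case it has been deactivated since the failed attempt
lvchange -ay cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata
# check ownership and SELinux context of the device node, and the SELinux mode
ls -lZ /dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata
getenforce
# repeat VDSM's zeroing step as the vdsm user
sudo -u vdsm dd if=/dev/zero of=/dev/cb5b0e2e-d68d-462a-b8fa-8894a6e0ed19/metadata bs=1048576 seek=0 skip=0 conv=notrunc count=40 oflag=direct

Depending on whether that dd fails as root, only as vdsm, or only with oflag=direct, it should narrow down where the Permission denied is coming from.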