[Users] HELP, ISCSI Freakin!
Ayal Baron
abaron at redhat.com
Mon Jul 22 07:46:59 UTC 2013
----- Original Message -----
> Forgot one thing, the pastebin link!
>
> http://pastebin.com/3t7E41JK
>
>
> Date: Wednesday, July 17, 2013 7:08 PM
> To: users <users at ovirt.org>
> Subject: HELP, ISCSI Freakin!
>
> Hey Guys,
>
> I am running oVirt 3.1, and I am getting the following in dmesg; I have
> pasted my VG info below. We are running iSCSI LUNs, and this is a node
> complaining in the log. Our EqualLogic storage was close to full when this
> started happening. I have since extended it (it is thin provisioned), but I
> am not sure whether a pvresize is needed. If so, can it be done while online?
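To the pvresize question: yes, it can generally be done online once the kernel
sees the new LUN size. A rough sketch, assuming device-mapper-multipath (the
WWID here is just a placeholder for whichever LUN you actually extended):

    iscsiadm -m session --rescan                        # pick up the new LUN size
    multipathd -k"resize map <wwid-of-extended-lun>"    # grow the multipath map
    pvresize /dev/mapper/<wwid-of-extended-lun>         # grow the PV in place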
Looking at your VG info, you have at least 450GB free on every storage domain, so free space doesn't seem to be the issue.
What you have there are real I/O errors.
The problematic device is /dev/mapper/36090a028108c2e4a2214151d000000c9, and it probably belongs to VG 2b93dc46-0db7-4a2a-a494-a294452c9d2f.
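As a sanity check, pvs should confirm which VG the device backs; something like:

    pvs -o pv_name,vg_name | grep 36090a028108c2e4a2214151d000000c9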
If this device/VG is no longer relevant, run "dmsetup remove /dev/mapper/36090a028108c2e4a2214151d000000c9".
If that doesn't work, try it again with '-f' (force).
This will remove the device from the system.
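Before reaching for force, it's worth checking whether anything still holds the
device open (a non-zero open count is why a plain remove fails):

    dmsetup info /dev/mapper/36090a028108c2e4a2214151d000000c9 | grep -i 'open count'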
If it is still relevant, then you need to fix the underlying access problem and make sure the host can reach the device again.
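Usually that means getting the iSCSI paths back; roughly:

    iscsiadm -m session                                  # list current sessions
    iscsiadm -m session --rescan                         # rescan them
    multipath -ll 36090a028108c2e4a2214151d000000c9      # path states should come back to active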
> Trying not to take down this cluster; it has several heavily used VMs on it.
> The VMs seem to be running OK for now, but this is filling up the logs pretty
> quickly, as it appears to keep retrying the connections and releasing the
> resources. All help is appreciated.
>
> Thanks for all your help, guys, and feel free to email me directly if I can
> provide any more info.
>
> Log Snippet:
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,335::resourceManager::565::ResourceManager::(releaseResource) No
> one is waiting for resource 'Storage.69d66cf5-9426-4385-be6c-c55e2bd40617',
> Clearing records.
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,335::task::978::TaskManager.Task::(_decref)
> Task=`03810e67-5dab-4193-8cbd-c633c8c25fa4`::ref 0 aborting False
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,336::task::588::TaskManager.Task::(_updateState)
> Task=`7379fc72-5137-4432-861c-85b1f33ba843`::moving from state init -> state
> preparing
> Thread-550892::INFO::2013-07-17
> 19:06:14,336::logUtils::37::dispatcher::(wrapper) Run and protect:
> getVolumeSize(sdUUID='7cf8a585-34a8-470c-9b40-9d486456fcea',
> spUUID='2ecfa6dd-a1fa-428c-9d04-db17c6c4491d',
> imgUUID='8cbc183a-0afe-497a-a54b-ec5f03b7fe16',
> volUUID='bcb8d500-5aa4-404a-9be2-9fb84bc87300', options=None)
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,336::resourceManager::175::ResourceManager.Request::(__init__)
> ResName=`Storage.7cf8a585-34a8-470c-9b40-9d486456fcea`ReqID=`7411edba-20b8-4855-8271-b6aa52a9203a`::Request
> was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at
> 'registerResource'
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,336::resourceManager::486::ResourceManager::(registerResource)
> Trying to register resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea'
> for lock type 'shared'
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,336::resourceManager::528::ResourceManager::(registerResource)
> Resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' is free. Now locking
> as 'shared' (1 active user)
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,336::resourceManager::212::ResourceManager.Request::(grant)
> ResName=`Storage.7cf8a585-34a8-470c-9b40-9d486456fcea`ReqID=`7411edba-20b8-4855-8271-b6aa52a9203a`::Granted
> request
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,337::task::817::TaskManager.Task::(resourceAcquired)
> Task=`7379fc72-5137-4432-861c-85b1f33ba843`::_resourcesAcquired:
> Storage.7cf8a585-34a8-470c-9b40-9d486456fcea (shared)
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,337::task::978::TaskManager.Task::(_decref)
> Task=`7379fc72-5137-4432-861c-85b1f33ba843`::ref 1 aborting False
> Thread-550892::INFO::2013-07-17
> 19:06:14,338::logUtils::39::dispatcher::(wrapper) Run and protect:
> getVolumeSize, Return response: {'truesize': '31138512896', 'apparentsize':
> '31138512896'}
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,338::task::1172::TaskManager.Task::(prepare)
> Task=`7379fc72-5137-4432-861c-85b1f33ba843`::finished: {'truesize':
> '31138512896', 'apparentsize': '31138512896'}
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,338::task::588::TaskManager.Task::(_updateState)
> Task=`7379fc72-5137-4432-861c-85b1f33ba843`::moving from state preparing ->
> state finished
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,338::resourceManager::809::ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources
> {'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea': < ResourceRef
> 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea', isValid: 'True' obj:
> 'None'>}
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,338::resourceManager::844::ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,338::resourceManager::538::ResourceManager::(releaseResource)
> Trying to release resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea'
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,338::resourceManager::553::ResourceManager::(releaseResource)
> Released resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' (0 active
> users)
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,338::resourceManager::558::ResourceManager::(releaseResource)
> Resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea' is free, finding out
> if anyone is waiting for it.
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,339::resourceManager::565::ResourceManager::(releaseResource) No
> one is waiting for resource 'Storage.7cf8a585-34a8-470c-9b40-9d486456fcea',
> Clearing records.
> Thread-550892::DEBUG::2013-07-17
> 19:06:14,339::task::978::TaskManager.Task::(_decref)
> Task=`7379fc72-5137-4432-861c-85b1f33ba843`::ref 0 aborting False
>
>
>
> Dmesg Snippet:
> __ratelimit: 28 callbacks suppressed
> end_request: I/O error, dev dm-64, sector 5769088
> end_request: I/O error, dev dm-64, sector 5769200
> end_request: I/O error, dev dm-64, sector 5507072
> end_request: I/O error, dev dm-64, sector 5507080
> end_request: I/O error, dev dm-64, sector 5507072
> end_request: I/O error, dev dm-64, sector 5506944
> end_request: I/O error, dev dm-64, sector 5507056
> end_request: I/O error, dev dm-64, sector 1312768
> end_request: I/O error, dev dm-64, sector 1312776
> end_request: I/O error, dev dm-64, sector 1312768
> end_request: I/O error, dev dm-64, sector 6031232
> end_request: I/O error, dev dm-64, sector 6031344
> end_request: I/O error, dev dm-64, sector 5769216
> end_request: I/O error, dev dm-64, sector 5769224
> end_request: I/O error, dev dm-64, sector 5769216
> end_request: I/O error, dev dm-64, sector 6293376
> end_request: I/O error, dev dm-64, sector 6293488
> end_request: I/O error, dev dm-64, sector 6031360
> end_request: I/O error, dev dm-64, sector 6031368
> end_request: I/O error, dev dm-64, sector 6031360
> end_request: I/O error, dev dm-64, sector 8390528
> end_request: I/O error, dev dm-64, sector 8390640
> end_request: I/O error, dev dm-64, sector 6293504
> end_request: I/O error, dev dm-64, sector 6293512
> end_request: I/O error, dev dm-64, sector 6293504
> __ratelimit: 25 callbacks suppressed
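dm-64 in that dmesg output should be the same device; you can confirm by
matching the dm minor number:

    dmsetup info -c | awk '$3 == 64'    # third column is the dm minor number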
>
>
> Thanks,
> Matt Curry
> Matt at MattCurry.com
>
>
>
>
> This is a PRIVATE message. If you are not the intended recipient, please
> delete without copying and kindly advise us by e-mail of the mistake in
> delivery. NOTE: Regardless of content, this e-mail shall not operate to bind
> SKOPOS to any order or other contract unless pursuant to explicit written
> agreement or government initiative expressly permitting the use of e-mail
> for such purpose.
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>