[ovirt-users] Odd messages on new node/hosted engine
Sandro Bonazzola
sbonazzo at redhat.com
Wed Jul 23 09:06:04 UTC 2014
On 23/07/2014 04:08, Andrew Lau wrote:
> Looks like the multipath rules oVirt seems to set up by default; not sure why.
> I just blacklist my (HDD) devices, otherwise my consoles get filled with similar messages.
hosted-engine setup doesn't set power management; it can be configured by hand from the UI.
Also, hosted-engine setup doesn't touch the multipath configuration.
If something changes it, it's not hosted-engine setup.
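
For reference, the blacklist Andrew describes is a change to /etc/multipath.conf on the host. A minimal sketch (the devnode pattern and wwid below are placeholders and must match your actual local SAS disks, e.g. taken from 'ls -l /dev/disk/by-id/'):

    blacklist {
        devnode "^sd[ab]$"
        wwid 3600508b1001c0123456789abcdef0123
    }

Then restart or reconfigure multipathd (e.g. 'service multipathd restart' on EL6) and check with 'multipath -ll' that no maps are left for those disks.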
>
> On Thu, Jul 17, 2014 at 5:41 AM, Chris Adams <cma at cmadams.net <mailto:cma at cmadams.net>> wrote:
>
> I built a new node + hosted engine setup, using up-to-date 6.5 and oVirt
> 3.4.3-RC. I see some odd messages that I think are probably related:
>
> - In the hosted engine UI, I have an Alert that says "Failed to verify
> Power Management configuration for Host hosted_engine_1."
>
> - On the node, I get the following chunk repeated every 10 seconds in
> /var/log/messages:
>
> Jul 16 14:34:19 node0 kernel: device-mapper: table: 253:2: multipath: error getting device
> Jul 16 14:34:19 node0 kernel: device-mapper: ioctl: error adding target to table
> Jul 16 14:34:19 node0 kernel: device-mapper: table: 253:2: multipath: error getting device
> Jul 16 14:34:19 node0 kernel: device-mapper: ioctl: error adding target to table
> Jul 16 14:34:19 node0 multipathd: dm-2: remove map (uevent)
> Jul 16 14:34:19 node0 multipathd: dm-2: devmap not registered, can't remove
> Jul 16 14:34:19 node0 cpuspeed: Disabling performance cpu frequency scaling governor
> Jul 16 14:34:19 node0 multipathd: dm-2: remove map (uevent)
> Jul 16 14:34:19 node0 multipathd: dm-2: devmap not registered, can't remove
> Jul 16 14:34:19 node0 multipathd: dm-2: remove map (uevent)
> Jul 16 14:34:19 node0 multipathd: dm-2: devmap not registered, can't remove
> Jul 16 14:34:19 node0 multipathd: dm-2: remove map (uevent)
> Jul 16 14:34:19 node0 multipathd: dm-2: devmap not registered, can't remove
> Jul 16 14:34:20 node0 cpuspeed: Enabling performance cpu frequency scaling governor
>
> There is no dm-2; the system is installed on 2 SAS drives, mirrored
> using Linux md RAID1, with LVM on top (dm-0 is the root filesystem and dm-1 is
> swap).
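
(As a quick check of what device-mapper and multipath are actually doing there, something along these lines should show it; the commands are illustrative, run as root:

    dmsetup ls --tree    # existing dm devices; should list only root and swap
    dmsetup table        # the table behind each map
    multipath -ll        # maps multipathd currently manages
    multipath -v3        # verbose rescan showing which sd* devices it considers

If the local SAS disks show up in the -v3 output, those are most likely the devices multipath keeps trying, and failing, to assemble as dm-2.)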
>
> Here's the corresponding chunk of /var/log/vdsm/vdsm.log:
>
> Thread-128::DEBUG::2014-07-16 14:34:19,092::task::595::TaskManager.Task::(_updateState) Task=`6db0f7ed-ec65-4685-ae2d-604560349317`::moving from
> state init -> state preparing
> Thread-128::INFO::2014-07-16 14:34:19,092::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> Thread-128::INFO::2014-07-16 14:34:19,093::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response:
> {'74cb6a07-5745-4b21-ba4b-d9012acb5cae': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000485922', 'lastCheck': '9.4', 'valid': True}}
> Thread-128::DEBUG::2014-07-16 14:34:19,093::task::1185::TaskManager.Task::(prepare) Task=`6db0f7ed-ec65-4685-ae2d-604560349317`::finished:
> {'74cb6a07-5745-4b21-ba4b-d9012acb5cae': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000485922', 'lastCheck': '9.4', 'valid': True}}
> Thread-128::DEBUG::2014-07-16 14:34:19,094::task::595::TaskManager.Task::(_updateState) Task=`6db0f7ed-ec65-4685-ae2d-604560349317`::moving from
> state preparing -> state finished
> Thread-128::DEBUG::2014-07-16 14:34:19,094::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-128::DEBUG::2014-07-16 14:34:19,094::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-128::DEBUG::2014-07-16 14:34:19,095::task::990::TaskManager.Task::(_decref) Task=`6db0f7ed-ec65-4685-ae2d-604560349317`::ref 0 aborting False
> Thread-2011::DEBUG::2014-07-16 14:34:19,369::BindingXMLRPC::251::vds::(wrapper) client [127.0.0.1]
> Thread-2011::DEBUG::2014-07-16 14:34:19,370::task::595::TaskManager.Task::(_updateState) Task=`44641295-7f85-40f9-ba71-6f587a96f387`::moving from
> state init -> state preparing
> Thread-2011::INFO::2014-07-16 14:34:19,371::logUtils::44::dispatcher::(wrapper) Run and protect: connectStorageServer(domType=1,
> spUUID='b15478ff-1ae1-4065-8e52-19c808d39597', conList=[{'port': '', 'connection': 'nfs.c1.api-digital.com:/vmstore/engine', 'iqn': '', 'portal':
> '', 'user': 'kvm', 'protocol_version': '4', 'password': '******', 'id': '7fb481a8-f7b2-4cf7-8862-8ff02acde48d'}], options=None)
> Thread-2011::DEBUG::2014-07-16 14:34:19,376::hsm::2328::Storage.HSM::(__prefetchDomains) nfs local path:
> /rhev/data-center/mnt/nfs.c1.api-digital.com:_vmstore_engine
> Thread-2011::DEBUG::2014-07-16 14:34:19,378::hsm::2352::Storage.HSM::(__prefetchDomains) Found SD uuids: ('74cb6a07-5745-4b21-ba4b-d9012acb5cae',)
> Thread-2011::DEBUG::2014-07-16 14:34:19,379::hsm::2408::Storage.HSM::(connectStorageServer) knownSDs: {74cb6a07-5745-4b21-ba4b-d9012acb5cae:
> storage.nfsSD.findDomain}
> Thread-2011::INFO::2014-07-16 14:34:19,379::logUtils::47::dispatcher::(wrapper) Run and protect: connectStorageServer, Return response:
> {'statuslist': [{'status': 0, 'id': '7fb481a8-f7b2-4cf7-8862-8ff02acde48d'}]}
> Thread-2011::DEBUG::2014-07-16 14:34:19,379::task::1185::TaskManager.Task::(prepare) Task=`44641295-7f85-40f9-ba71-6f587a96f387`::finished:
> {'statuslist': [{'status': 0, 'id': '7fb481a8-f7b2-4cf7-8862-8ff02acde48d'}]}
> Thread-2011::DEBUG::2014-07-16 14:34:19,379::task::595::TaskManager.Task::(_updateState) Task=`44641295-7f85-40f9-ba71-6f587a96f387`::moving from
> state preparing -> state finished
> Thread-2011::DEBUG::2014-07-16 14:34:19,380::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-2011::DEBUG::2014-07-16 14:34:19,380::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-2011::DEBUG::2014-07-16 14:34:19,380::task::990::TaskManager.Task::(_decref) Task=`44641295-7f85-40f9-ba71-6f587a96f387`::ref 0 aborting False
> Thread-2012::DEBUG::2014-07-16 14:34:19,571::BindingXMLRPC::251::vds::(wrapper) client [127.0.0.1]
> Thread-2012::DEBUG::2014-07-16 14:34:19,572::task::595::TaskManager.Task::(_updateState) Task=`a1e6fda0-5839-4a07-938f-76eb1705e654`::moving from
> state init -> state preparing
> Thread-2012::INFO::2014-07-16 14:34:19,573::logUtils::44::dispatcher::(wrapper) Run and protect: repoStats(options=None)
> Thread-2012::INFO::2014-07-16 14:34:19,573::logUtils::47::dispatcher::(wrapper) Run and protect: repoStats, Return response:
> {'74cb6a07-5745-4b21-ba4b-d9012acb5cae': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000485922', 'lastCheck': '9.9', 'valid': True}}
> Thread-2012::DEBUG::2014-07-16 14:34:19,573::task::1185::TaskManager.Task::(prepare) Task=`a1e6fda0-5839-4a07-938f-76eb1705e654`::finished:
> {'74cb6a07-5745-4b21-ba4b-d9012acb5cae': {'code': 0, 'version': 3, 'acquired': True, 'delay': '0.000485922', 'lastCheck': '9.9', 'valid': True}}
> Thread-2012::DEBUG::2014-07-16 14:34:19,574::task::595::TaskManager.Task::(_updateState) Task=`a1e6fda0-5839-4a07-938f-76eb1705e654`::moving from
> state preparing -> state finished
> Thread-2012::DEBUG::2014-07-16 14:34:19,574::resourceManager::940::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
> Thread-2012::DEBUG::2014-07-16 14:34:19,574::resourceManager::977::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
> Thread-2012::DEBUG::2014-07-16 14:34:19,574::task::990::TaskManager.Task::(_decref) Task=`a1e6fda0-5839-4a07-938f-76eb1705e654`::ref 0 aborting False
> Thread-40::DEBUG::2014-07-16 14:34:19,664::misc::756::SamplingMethod::(__call__) Trying to enter sampling method (storage.sdc.refreshStorage)
> Thread-40::DEBUG::2014-07-16 14:34:19,665::misc::758::SamplingMethod::(__call__) Got in to sampling method
> Thread-40::DEBUG::2014-07-16 14:34:19,665::misc::756::SamplingMethod::(__call__) Trying to enter sampling method (storage.iscsi.rescan)
> Thread-40::DEBUG::2014-07-16 14:34:19,666::misc::758::SamplingMethod::(__call__) Got in to sampling method
> Thread-40::DEBUG::2014-07-16 14:34:19,666::iscsi::407::Storage.ISCSI::(rescan) Performing SCSI scan, this will take up to 30 seconds
> Thread-40::DEBUG::2014-07-16 14:34:19,666::iscsiadm::92::Storage.Misc.excCmd::(_runCmd) '/usr/bin/sudo -n /sbin/iscsiadm -m session -R' (cwd None)
> Thread-40::DEBUG::2014-07-16 14:34:19,709::misc::766::SamplingMethod::(__call__) Returning last result
> Thread-40::DEBUG::2014-07-16 14:34:19,710::multipath::109::Storage.Misc.excCmd::(rescan) '/usr/bin/sudo -n /sbin/multipath' (cwd None)
> Thread-40::DEBUG::2014-07-16 14:34:19,842::multipath::109::Storage.Misc.excCmd::(rescan) SUCCESS: <err> = ''; <rc> = 0
> Thread-40::DEBUG::2014-07-16 14:34:19,843::lvm::497::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' got the operation mutex
> Thread-40::DEBUG::2014-07-16 14:34:19,844::lvm::499::OperationMutex::(_invalidateAllPvs) Operation 'lvm invalidate operation' released the
> operation mutex
> Thread-40::DEBUG::2014-07-16 14:34:19,844::lvm::508::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' got the operation mutex
> Thread-40::DEBUG::2014-07-16 14:34:19,845::lvm::510::OperationMutex::(_invalidateAllVgs) Operation 'lvm invalidate operation' released the
> operation mutex
> Thread-40::DEBUG::2014-07-16 14:34:19,845::lvm::528::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' got the operation mutex
> Thread-40::DEBUG::2014-07-16 14:34:19,846::lvm::530::OperationMutex::(_invalidateAllLvs) Operation 'lvm invalidate operation' released the
> operation mutex
> Thread-40::DEBUG::2014-07-16 14:34:19,846::misc::766::SamplingMethod::(__call__) Returning last result
> Thread-40::DEBUG::2014-07-16 14:34:19,854::fileSD::150::Storage.StorageDomain::(__init__) Reading domain in path
> /rhev/data-center/mnt/nfs.c1.api-digital.com:_vmstore_engine/74cb6a07-5745-4b21-ba4b-d9012acb5cae
> Thread-40::DEBUG::2014-07-16 14:34:19,856::persistentDict::192::Storage.PersistentDict::(__init__) Created a persistent dict with FileMetadataRW
> backend
> Thread-40::DEBUG::2014-07-16 14:34:19,860::persistentDict::234::Storage.PersistentDict::(refresh) read lines (FileMetadataRW)=['CLASS=Data',
> 'DESCRIPTION=hosted_storage', 'IOOPTIMEOUTSEC=10', 'LEASERETRIES=3', 'LEASETIMESEC=60', 'LOCKPOLICY=ON', 'LOCKRENEWALINTERVALSEC=5',
> 'MASTER_VERSION=1', 'POOL_DESCRIPTION=c1', 'POOL_DOMAINS=74cb6a07-5745-4b21-ba4b-d9012acb5cae:Active', 'POOL_SPM_ID=-1', 'POOL_SPM_LVER=0',
> 'POOL_UUID=b15478ff-1ae1-4065-8e52-19c808d39597', 'REMOTE_PATH=nfs.c1.api-digital.com:/vmstore/engine', 'ROLE=Master',
> 'SDUUID=74cb6a07-5745-4b21-ba4b-d9012acb5cae', 'TYPE=NFS', 'VERSION=3', '_SHA_CKSUM=4f007c871da3177ba5546459bcebc8be8aff689e']
> Thread-40::DEBUG::2014-07-16 14:34:19,863::fileSD::609::Storage.StorageDomain::(imageGarbageCollector) Removing remnants of deleted images []
> Thread-40::INFO::2014-07-16 14:34:19,863::sd::383::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace
> 74cb6a07-5745-4b21-ba4b-d9012acb5cae_imageNS already registered
> Thread-40::INFO::2014-07-16 14:34:19,863::sd::391::Storage.StorageDomain::(_registerResourceNamespaces) Resource namespace
> 74cb6a07-5745-4b21-ba4b-d9012acb5cae_volumeNS already registered
> Thread-40::DEBUG::2014-07-16 14:34:19,868::fileSD::259::Storage.Misc.excCmd::(getReadDelay) '/bin/dd iflag=direct
> if=/rhev/data-center/mnt/nfs.c1.api-digital.com:_vmstore_engine/74cb6a07-5745-4b21-ba4b-d9012acb5cae/dom_md/metadata bs=4096 count=1' (cwd None)
> Thread-40::DEBUG::2014-07-16 14:34:19,885::fileSD::259::Storage.Misc.excCmd::(getReadDelay) SUCCESS: <err> = '0+1 records in\n0+1 records out\n476
> bytes (476 B) copied, 0.000548138 s, 868 kB/s\n'; <rc> = 0
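
(Note that the chunk above ends with VDSM invoking '/usr/bin/sudo -n /sbin/multipath' as part of its periodic storage refresh; that rescan repeats roughly every 10 seconds and lines up with the timestamps of the kernel/multipathd messages. To confirm the correlation by hand, something like the following should reproduce it; illustrative only:

    tail -f /var/log/messages &
    sudo -n /sbin/multipath    # the same rescan VDSM triggers every ~10 seconds

If the same 'error getting device' lines appear, blacklisting the local disks as Andrew suggests above is the usual way to silence them.)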
>
> --
> Chris Adams <cma at cmadams.net <mailto:cma at cmadams.net>>
> _______________________________________________
> Users mailing list
> Users at ovirt.org <mailto:Users at ovirt.org>
> http://lists.ovirt.org/mailman/listinfo/users
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com