[ovirt-users] . Failing to add iSCSI storage domain Users Digest, Vol 51, Issue 16

Jean-Pierre Ribeauville jpribeauville at axway.com
Wed Dec 2 12:18:42 UTC 2015


Hi,

The failure reason was the following: the initiator IQN wasn't declared on the iSCSI server.
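For reference: on a Linux target implemented with LIO and managed via targetcli, declaring (whitelisting) an initiator IQN means creating an ACL for it under the target portal group. This is only a sketch; the target IQN below is a hypothetical placeholder, and other target implementations (storage appliances, FreeNAS, etc.) have their own way to whitelist initiators.

```shell
# Host side: read this initiator's IQN, which must be whitelisted on the target.
iqn=$(sed -n 's/^InitiatorName=//p' /etc/iscsi/initiatorname.iscsi 2>/dev/null)
echo "This host's initiator IQN: ${iqn:-<not found>}"

# Target side, only if the target is a Linux LIO target managed with targetcli
# (the target IQN below is a hypothetical placeholder):
command -v targetcli >/dev/null \
  && targetcli "/iscsi/iqn.2015-12.com.example:target1/tpg1/acls create ${iqn}" \
  || echo "targetcli not available here; whitelist the IQN on your target instead"
```

After adding the ACL, re-run the discovery and login from the oVirt host.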

Sorry for the noise.

Regards,

J.P.

-----Original Message-----
From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On behalf of users-request at ovirt.org
Sent: Wednesday, 2 December 2015 11:42
To: users at ovirt.org
Subject: Users Digest, Vol 51, Issue 16

Send Users mailing list submissions to
	users at ovirt.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
	users-request at ovirt.org

You can reach the person managing the list at
	users-owner at ovirt.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of Users digest..."


Today's Topics:

   1.  Failing to add iSCSI storage domain (Jean-Pierre Ribeauville)
   2. Re:  HA cluster (Simone Tiraboschi)


----------------------------------------------------------------------

Message: 1
Date: Wed, 2 Dec 2015 10:41:11 +0000
From: Jean-Pierre Ribeauville <jpribeauville at axway.com>
To: "users at ovirt.org" <users at ovirt.org>
Subject: [ovirt-users] Failing to add iSCSI storage domain
Message-ID:
	<1051EFB4D3A1704680C38CCAAC5836D2929DC082 at WPTXMAIL2.ptx.axway.int>
Content-Type: text/plain; charset="iso-8859-1"

Hi,

Using oVirt 3.5, I'm trying to add an iSCSI storage domain to my datacenter.


When clicking on the Discover button, I got the following error:

No new devices were found. This may be due to either: incorrect multipath configuration on the Host or wrong address of the iscsi target or a failure to authenticate on the target device. Please consult your Storage Administrator.

I restarted the sanlock service on the host (a RHEL 7 machine); it had no positive effect.

In engine.log I found this:

...
IQNListReturnForXmlRpc [mStatus=StatusForXmlRpc [mCode=475, mMessage=Failed discovery of iSCSI targets: "portal=IscsiPortal(hostname=u'10.147.60.90', port=3260), err=(21, [], ['iscsiadm: No portals found'])"]]
2015-12-02 11:28:31,468 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand] (ajp-/127.0.0.1:8702-5) HostName = ldc01omv01
2015-12-02 11:28:31,468 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand] (ajp-/127.0.0.1:8702-5) Command DiscoverSendTargetsVDSCommand(HostName = ldc01omv01, HostId = 09bb3024-170f-48a1-a78a-951a2c61c680, connection={ id: null, connection: 10.147.60.90, iqn: null, vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };) execution failed. Exception: VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to DiscoverSendTargetsVDS, error = Failed discovery of iSCSI targets: "portal=IscsiPortal(hostname=u'10.147.60.90', port=3260), err=(21, [], ['iscsiadm: No portals found'])"
2015-12-02 11:28:31,468 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand] (ajp-/127.0.0.1:8702-5) FINISH, DiscoverSendTargetsVDSCommand, log id: 3023392e
2015-12-02 11:28:31,468 ERROR [org.ovirt.engine.core.bll.storage.DiscoverSendTargetsQuery] (ajp-/127.0.0.1:8702-5) Query DiscoverSendTargetsQuery failed. Exception message is VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to DiscoverSendTargetsVDS, error = Failed discovery of iSCSI targets: "portal=IscsiPortal(hostname=u'10.147.60.90', port=3260), err=(21, [], ['iscsiadm: No portals found'])" (Failed with error iSCSIDiscoveryError and code 475) : org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to DiscoverSendTargetsVDS, error = Failed discovery of iSCSI targets: "portal=IscsiPortal(hostname=u'10.147.60.90', port=3260), err=(21, [], ['iscsiadm: No portals found'])" (Failed with error iSCSIDiscoveryError and code 475): org.ovirt.engine.core.commo
 n.errors.VdcBLLException: VdcBLLException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed in vdscommand to DiscoverSendTargetsVDS, error = Failed discovery of iSCSI targets: "portal=IscsiPortal(hostname=u'10.147.60.90', port=3260), err=(21, [], ['iscsiadm: No portals found'])" (Failed with error iSCSIDiscoveryError and code 475)
        at org.ovirt.engine.core.bll.VdsHandler.handleVdsResult(VdsHandler.java:117) [bll.jar:]
        at org.ovirt.engine.core.bll.VDSBrokerFrontendImpl.RunVdsCommand(VDSBrokerFrontendImpl.java:33) [bll.jar:]
        at org.ovirt.engine.core.bll.storage.DiscoverSendTargetsQuery.executeQueryCommand(DiscoverSendTargetsQuery.java:16) [bll.jar:]
        at org.ovirt.engine.core.bll.QueriesCommandBase.executeCommand(QueriesCommandBase.java:73) [bll.jar:]

...



To check iSCSI discovery another way, I ran this command from the oVirt manager's command prompt:

iscsiadm -m discovery -t st -p <iSCSI Server Address>

I got a correct device list.
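A note on where that command runs: vdsm issues iscsiadm on the hypervisor host (ldc01omv01 in the engine.log above), not on the engine machine, so a discovery that succeeds from the manager's prompt can still fail from the host. A sketch of the equivalent check, run on the host itself (portal address taken from the engine.log above; adjust for your setup):

```shell
# Portal seen in the engine.log above; adjust for your target.
portal=10.147.60.90
port=3260

# The same sendtargets discovery vdsm performs ("-t st" abbreviates "-t sendtargets"):
command -v iscsiadm >/dev/null \
  && iscsiadm -m discovery -t sendtargets -p "${portal}:${port}" \
  || echo "iscsiadm not installed on this machine"
```

If discovery succeeds on the host but the engine still reports code 475, compare the IQN in the host's /etc/iscsi/initiatorname.iscsi with the ACLs configured on the target.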

Thanks for your help.



J.P. Ribeauville


P: +33.(0).1.47.17.20.49
Puteaux 3 Etage 5  Bureau 4

jpribeauville at axway.com
http://www.axway.com



Please consider the environment before printing.




------------------------------

Message: 2
Date: Wed, 2 Dec 2015 11:41:02 +0100
From: Simone Tiraboschi <stirabos at redhat.com>
To: Budur Nagaraju <nbudoor at gmail.com>
Cc: users <users at ovirt.org>
Subject: Re: [ovirt-users] HA cluster
Message-ID:
	<CAN8-ONrgt0gOT3JVgURpq6GaM5B-rCTjaXJ4Nd-FNk1F1UEBJw at mail.gmail.com>
Content-Type: text/plain; charset="utf-8"

On Wed, Dec 2, 2015 at 11:25 AM, Budur Nagaraju <nbudoor at gmail.com> wrote:

> Please find the logs at the below-mentioned URL,
>
> http://pastebin.com/ZeKyyFbN
>

OK, the issue is here:

Thread-88::ERROR::2015-12-02
15:06:27,735::vm::2358::vm.Vm::(_startUnderlyingVm)
vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::The vm start process failed Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2298, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3363, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib/python2.6/site-packages/vdsm/libvirtconnection.py", line 119, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2709, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed',
conn=self)
libvirtError: unsupported configuration: Domain requires KVM, but it is not available. Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules.
Thread-88::DEBUG::2015-12-02 15:06:27,751::vm::2813::vm.Vm::(setDownStatus)
vmId=`93db4369-285f-48bc-bc68-181d9de41a3c`::Changed state to Down:
unsupported configuration: Domain requires KVM, but it is not available.
Check that virtualization is enabled in the host BIOS, and host configuration is setup to load the kvm modules. (code=1)

but it's pretty strange, because hosted-engine-setup already explicitly checks for virtualization support and just exits with a clear error if it's missing.
Did you play with the kvm module while hosted-engine-setup was running?

Can you please share the hosted-engine-setup logs?
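To verify this before re-running hosted-engine-setup, a quick check on the host (a sketch; note that the supervdsm.log quoted later in this thread reports systemProductName 'KVM', i.e. the host itself appears to be a virtual machine, in which case nested virtualization must be enabled on the physical host for /dev/kvm to appear):

```shell
# Does the CPU advertise hardware virtualization? (vmx = Intel VT-x, svm = AMD-V)
grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u

# Are the kvm kernel modules loaded?
lsmod | grep -E '^kvm' || echo "kvm modules not loaded (try modprobe kvm_intel or kvm_amd)"

# Does /dev/kvm exist? This is what libvirt ultimately needs.
[ -e /dev/kvm ] && echo "/dev/kvm present" || echo "/dev/kvm missing"
```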


>
> On Fri, Nov 27, 2015 at 6:39 PM, Simone Tiraboschi 
> <stirabos at redhat.com>
> wrote:
>
>>
>>
>> On Fri, Nov 27, 2015 at 12:42 PM, Maxim Kovgan <kovganm at gmail.com> wrote:
>>
>>> Maybe it even makes sense to open a Bugzilla ticket already. Better
>>> safe than sorry.
>>>
>>
>> We still need at least one log file to understand what happened.
>>
>>
>>> On Nov 27, 2015 11:35 AM, "Simone Tiraboschi" <stirabos at redhat.com>
>>> wrote:
>>>
>>>>
>>>> On Fri, Nov 27, 2015 at 10:10 AM, Budur Nagaraju 
>>>> <nbudoor at gmail.com>
>>>> wrote:
>>>>
>>>>> I do not know what logs you are expecting; the logs I got are
>>>>> pasted in the mail. If you require pastebin, let me know and I
>>>>> will upload them there.
>>>>>
>>>>
>>>>
>>>> Please run sosreport utility and share the resulting archive where 
>>>> you prefer.
>>>> You can follow this guide:
>>>> http://www.linuxtechi.com/how-to-create-sosreport-in-linux/
>>>>
>>>>>
>>>>>
>>>>> On Fri, Nov 27, 2015 at 1:58 PM, Sandro Bonazzola 
>>>>> <sbonazzo at redhat.com
>>>>> > wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Nov 27, 2015 at 8:34 AM, Budur Nagaraju 
>>>>>> <nbudoor at gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> I got only 10 lines in the vdsm logs; they are below,
>>>>>>>
>>>>>>>
>>>>>> Can you please provide full sos report?
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> [root at he /]# tail -f /var/log/vdsm/vdsm.log
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,360::resourceManager::616::Storage.ResourceManager::(re
>>>>>>> leaseResource) Trying to release resource 
>>>>>>> 'Storage.HsmDomainMonitorLock'
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,360::resourceManager::635::Storage.ResourceManager::(re
>>>>>>> leaseResource) Released resource 'Storage.HsmDomainMonitorLock' 
>>>>>>> (0 active users)
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,360::resourceManager::641::Storage.ResourceManager::(re
>>>>>>> leaseResource) Resource 'Storage.HsmDomainMonitorLock' is free, 
>>>>>>> finding out if anyone is waiting for it.
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,360::resourceManager::649::Storage.ResourceManager::(re
>>>>>>> leaseResource) No one is waiting for resource 
>>>>>>> 'Storage.HsmDomainMonitorLock', Clearing records.
>>>>>>> Thread-100::INFO::2015-11-27
>>>>>>> 12:58:57,360::logUtils::47::dispatcher::(wrapper) Run and protect:
>>>>>>> stopMonitoringDomain, Return response: None
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,361::task::1191::Storage.TaskManager.Task::(prepare)
>>>>>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::finished: None
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,361::task::595::Storage.TaskManager.Task::(_updateState
>>>>>>> ) Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::moving from state 
>>>>>>> preparing -> state finished
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,361::resourceManager::940::Storage.ResourceManager.Owne
>>>>>>> r::(releaseAll) Owner.releaseAll requests {} resources {}
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,361::resourceManager::977::Storage.ResourceManager.Owne
>>>>>>> r::(cancelAll)
>>>>>>> Owner.cancelAll requests {}
>>>>>>> Thread-100::DEBUG::2015-11-27
>>>>>>> 12:58:57,361::task::993::Storage.TaskManager.Task::(_decref)
>>>>>>> Task=`0128b179-fdb3-474b-a196-8cc81a72a837`::ref 0 aborting 
>>>>>>> False
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Thu, Nov 26, 2015 at 4:20 PM, Simone Tiraboschi < 
>>>>>>> stirabos at redhat.com> wrote:
>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Nov 26, 2015 at 11:05 AM, Budur Nagaraju 
>>>>>>>> <nbudoor at gmail.com
>>>>>>>> > wrote:
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *Below are the entire logs*
>>>>>>>>>
>>>>>>>>>
>>>>>>>> Sorry, by the entire log I mean attaching or sharing somewhere
>>>>>>>> the whole /var/log/vdsm/vdsm.log, because the latest ten lines
>>>>>>>> are not enough to point out the issue.
>>>>>>>>
>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *[root at he ~]# tail -f /var/log/vdsm/vdsm.log *
>>>>>>>>>
>>>>>>>>> Detector thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAcceptor
>>>>>>>>> ::(_handle_connection_read) Detected protocol xml from 
>>>>>>>>> 127.0.0.1:50944 Detector thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket) 
>>>>>>>>> xml over http detected from ('127.0.0.1', 50944) Detector 
>>>>>>>>> thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAcceptor
>>>>>>>>> ::(_add_connection) Adding connection from 127.0.0.1:50945 
>>>>>>>>> Detector thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAcceptor
>>>>>>>>> ::(_remove_connection) Connection removed from 127.0.0.1:50945 
>>>>>>>>> Detector thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAcceptor
>>>>>>>>> ::(_handle_connection_read) Detected protocol xml from 
>>>>>>>>> 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket) 
>>>>>>>>> xml over http detected from ('127.0.0.1', 50945) Detector 
>>>>>>>>> thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAcceptor
>>>>>>>>> ::(_add_connection) Adding connection from 127.0.0.1:50946 
>>>>>>>>> Detector thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAcceptor
>>>>>>>>> ::(_remove_connection) Connection removed from 127.0.0.1:50946 
>>>>>>>>> Detector thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAcceptor
>>>>>>>>> ::(_handle_connection_read) Detected protocol xml from 
>>>>>>>>> 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26
>>>>>>>>> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket) 
>>>>>>>>> xml over http detected from ('127.0.0.1', 50946)
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *[root at he ~]# tail -f /var/log/vdsm/supervdsm.log *
>>>>>>>>>
>>>>>>>>> MainProcess::DEBUG::2015-11-26
>>>>>>>>> 15:13:30,234::supervdsmServer::102::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper) call readMultipathConf with () {}
>>>>>>>>> MainProcess::DEBUG::2015-11-26
>>>>>>>>> 15:13:30,234::supervdsmServer::109::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper) return readMultipathConf with ['# RHEV REVISION 
>>>>>>>>> 1.1', '', 'defaults {',
>>>>>>>>> '    polling_interval        5', '    getuid_callout
>>>>>>>>> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
>>>>>>>>> '    no_path_retry           fail', '    user_friendly_names     no', '
>>>>>>>>> flush_on_last_del       yes', '    fast_io_fail_tmo        5', '
>>>>>>>>> dev_loss_tmo            30', '    max_fds                 4096', '}', '',
>>>>>>>>> 'devices {', 'device {', '    vendor                  "HITACHI"', '
>>>>>>>>> product                 "DF.*"', '    getuid_callout
>>>>>>>>> "/lib/udev/scsi_id --whitelisted --replace-whitespace --device=/dev/%n"',
>>>>>>>>> '}', 'device {', '    vendor                  "COMPELNT"', '
>>>>>>>>> product                 "Compellent Vol"', '    no_path_retry
>>>>>>>>> fail', '}', 'device {', '    # multipath.conf.default', '
>>>>>>>>> vendor                  "DGC"', '    product                 ".*"', '
>>>>>>>>> product_blacklist       "LUNZ"', '    path_grouping_policy
>>>>>>>>> "group_by_prio"', '    path_checker            "emc_clariion"', '
>>>>>>>>> hardware_handler        "1 emc"', '    prio                    "emc"', '
>>>>>>>>> failback                immediate', '    rr_weight
>>>>>>>>> "uniform"', '    # vdsm required configuration', '
>>>>>>>>> getuid_callout          "/lib/udev/scsi_id --whitelisted
>>>>>>>>> --replace-whitespace --device=/dev/%n"', '    features                "0"',
>>>>>>>>> '    no_path_retry           fail', '}', '}']
>>>>>>>>> MainProcess|Thread-13::DEBUG::2015-11-26
>>>>>>>>> 15:13:31,365::supervdsmServer::102::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper) call getHardwareInfo with () {}
>>>>>>>>> MainProcess|Thread-13::DEBUG::2015-11-26
>>>>>>>>> 15:13:31,397::supervdsmServer::109::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper) return getHardwareInfo with {'systemProductName': 
>>>>>>>>> 'KVM', 'systemUUID':
>>>>>>>>> 'f91632f2-7a17-4ddb-9631-742f82a77480', 'systemFamily': 'Red 
>>>>>>>>> Hat Enterprise Linux', 'systemVersion': 'RHEL 7.0.0 PC (i440FX 
>>>>>>>>> + PIIX, 1996)',
>>>>>>>>> 'systemManufacturer': 'Red Hat'}
>>>>>>>>> MainProcess|Thread-21::DEBUG::2015-11-26
>>>>>>>>> 15:13:35,393::supervdsmServer::102::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), 
>>>>>>>>> '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
>>>>>>>>> MainProcess|Thread-21::DEBUG::2015-11-26
>>>>>>>>> 15:13:35,395::supervdsmServer::109::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper) return validateAccess with None
>>>>>>>>> MainProcess|Thread-22::DEBUG::2015-11-26
>>>>>>>>> 15:13:36,067::supervdsmServer::102::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper) call validateAccess with ('qemu', ('qemu', 'kvm'), 
>>>>>>>>> '/rhev/data-center/mnt/10.204.207.152:_home_vms', 5) {}
>>>>>>>>> MainProcess|Thread-22::DEBUG::2015-11-26
>>>>>>>>> 15:13:36,069::supervdsmServer::109::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper) return validateAccess with None
>>>>>>>>> MainProcess|PolicyEngine::DEBUG::2015-11-26
>>>>>>>>> 15:13:40,619::supervdsmServer::102::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper) call ksmTune with ({'run': 0},) {}
>>>>>>>>> MainProcess|PolicyEngine::DEBUG::2015-11-26
>>>>>>>>> 15:13:40,619::supervdsmServer::109::SuperVdsm.ServerCallback::
>>>>>>>>> (wrapper)
>>>>>>>>> return ksmTune with None
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *[root at he ~]# tail -f /var/log/vdsm/connectivity.log *
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2015-11-26 15:02:02,632:DEBUG:recent_client:False
>>>>>>>>> 2015-11-26 15:04:44,975:DEBUG:recent_client:True
>>>>>>>>> 2015-11-26 15:05:15,039:DEBUG:recent_client:False
>>>>>>>>> 2015-11-26 15:07:23,311:DEBUG:recent_client:True
>>>>>>>>> 2015-11-26 15:08:25,774:DEBUG:recent_client:True,
>>>>>>>>> ovirtmgmt:(operstate:up speed:0 duplex:unknown), 
>>>>>>>>> lo:(operstate:up speed:0 duplex:unknown), 
>>>>>>>>> ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), 
>>>>>>>>> bond0:(operstate:down speed:0 duplex:unknown), 
>>>>>>>>> eth0:(operstate:up speed:100
>>>>>>>>> duplex:full)
>>>>>>>>> 2015-11-26 15:08:55,845:DEBUG:recent_client:False
>>>>>>>>> 2015-11-26 15:08:59,859:DEBUG:recent_client:True
>>>>>>>>> 2015-11-26 15:09:29,929:DEBUG:recent_client:False
>>>>>>>>> 2015-11-26 15:13:32,292:DEBUG:recent_client:True,
>>>>>>>>> ovirtmgmt:(operstate:up speed:0 duplex:unknown), 
>>>>>>>>> lo:(operstate:up speed:0 duplex:unknown), 
>>>>>>>>> ;vdsmdummy;:(operstate:down speed:0 duplex:unknown), 
>>>>>>>>> bond0:(operstate:down speed:0 duplex:unknown), 
>>>>>>>>> eth0:(operstate:up speed:100
>>>>>>>>> duplex:full)
>>>>>>>>> 2015-11-26 15:14:02,363:DEBUG:recent_client:False
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *[root at he ~]# tail -f /var/log/vdsm/mom.log *
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> 2015-11-26 15:13:30,581 - mom.Policy - INFO - Loaded policy 
>>>>>>>>> '04-cputune'
>>>>>>>>> 2015-11-26 15:13:30,581 - mom.PolicyEngine - INFO - Policy 
>>>>>>>>> Engine starting
>>>>>>>>> 2015-11-26 15:13:30,582 - mom.RPCServer - INFO - RPC Server is 
>>>>>>>>> disabled
>>>>>>>>> 2015-11-26 15:13:40,618 - mom.Controllers.KSM - INFO - 
>>>>>>>>> Updating KSM configuration: pages_to_scan:0 
>>>>>>>>> merge_across_nodes:8 run:0
>>>>>>>>> sleep_millisecs:0
>>>>>>>>> 2015-11-26 15:14:51,492 - mom.RPCServer - INFO - 
>>>>>>>>> getStatistics()
>>>>>>>>> 2015-11-26 15:14:56,962 - mom.RPCServer - INFO - 
>>>>>>>>> getStatistics()
>>>>>>>>> 2015-11-26 15:15:02,451 - mom.RPCServer - INFO - 
>>>>>>>>> getStatistics()
>>>>>>>>> 2015-11-26 15:15:07,777 - mom.RPCServer - INFO - 
>>>>>>>>> getStatistics()
>>>>>>>>> 2015-11-26 15:15:13,267 - mom.RPCServer - INFO - 
>>>>>>>>> getStatistics()
>>>>>>>>> 2015-11-26 15:15:18,765 - mom.RPCServer - INFO - 
>>>>>>>>> getStatistics()
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Thu, Nov 26, 2015 at 3:28 PM, Budur Nagaraju 
>>>>>>>>> <nbudoor at gmail.com
>>>>>>>>> > wrote:
>>>>>>>>>
>>>>>>>>>> Below are the logs,
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> [root at he ~]# tail -f /var/log/vdsm/vdsm.log Detector 
>>>>>>>>>> thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:05,622::protocoldetector::247::vds.MultiProtocolAccepto
>>>>>>>>>> r::(_handle_connection_read) Detected protocol xml from 
>>>>>>>>>> 127.0.0.1:50944 Detector thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:05,623::BindingXMLRPC::1173::XmlDetector::(handleSocket
>>>>>>>>>> ) xml over http detected from ('127.0.0.1', 50944) Detector 
>>>>>>>>>> thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:05,703::protocoldetector::187::vds.MultiProtocolAccepto
>>>>>>>>>> r::(_add_connection) Adding connection from 127.0.0.1:50945 
>>>>>>>>>> Detector thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:06,101::protocoldetector::201::vds.MultiProtocolAccepto
>>>>>>>>>> r::(_remove_connection) Connection removed from 
>>>>>>>>>> 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:06,101::protocoldetector::247::vds.MultiProtocolAccepto
>>>>>>>>>> r::(_handle_connection_read) Detected protocol xml from 
>>>>>>>>>> 127.0.0.1:50945 Detector thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:06,101::BindingXMLRPC::1173::XmlDetector::(handleSocket
>>>>>>>>>> ) xml over http detected from ('127.0.0.1', 50945) Detector 
>>>>>>>>>> thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:06,182::protocoldetector::187::vds.MultiProtocolAccepto
>>>>>>>>>> r::(_add_connection) Adding connection from 127.0.0.1:50946 
>>>>>>>>>> Detector thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:06,710::protocoldetector::201::vds.MultiProtocolAccepto
>>>>>>>>>> r::(_remove_connection) Connection removed from 
>>>>>>>>>> 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:06,711::protocoldetector::247::vds.MultiProtocolAccepto
>>>>>>>>>> r::(_handle_connection_read) Detected protocol xml from 
>>>>>>>>>> 127.0.0.1:50946 Detector thread::DEBUG::2015-11-26
>>>>>>>>>> 15:16:06,711::BindingXMLRPC::1173::XmlDetector::(handleSocket
>>>>>>>>>> ) xml over http detected from ('127.0.0.1', 50946)
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Thu, Nov 26, 2015 at 3:06 PM, Simone Tiraboschi < 
>>>>>>>>>> stirabos at redhat.com> wrote:
>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Thu, Nov 26, 2015 at 10:33 AM, Budur Nagaraju < 
>>>>>>>>>>> nbudoor at gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> I have done a fresh installation and now am getting the 
>>>>>>>>>>>> below error,
>>>>>>>>>>>>
>>>>>>>>>>>> [ INFO  ] Updating hosted-engine configuration [ INFO  ] 
>>>>>>>>>>>> Stage: Transaction commit [ INFO  ] Stage: Closing up
>>>>>>>>>>>>           The following network ports should be opened:
>>>>>>>>>>>>               tcp:5900
>>>>>>>>>>>>               tcp:5901
>>>>>>>>>>>>               udp:5900
>>>>>>>>>>>>               udp:5901
>>>>>>>>>>>>           An example of the required configuration for 
>>>>>>>>>>>> iptables can be found at:
>>>>>>>>>>>>               /etc/ovirt-hosted-engine/iptables.example
>>>>>>>>>>>>           In order to configure firewalld, copy the files from
>>>>>>>>>>>>           /etc/ovirt-hosted-engine/firewalld to 
>>>>>>>>>>>> /etc/firewalld/services
>>>>>>>>>>>>           and execute the following commands:
>>>>>>>>>>>>               firewall-cmd -service hosted-console [ INFO  
>>>>>>>>>>>> ] Creating VM [ ERROR ] Failed to execute stage 'Closing 
>>>>>>>>>>>> up': Cannot set temporary password for console connection. 
>>>>>>>>>>>> The VM may not have been
>>>>>>>>>>>> created: please check VDSM logs [ INFO  ] Stage: Clean up [ 
>>>>>>>>>>>> INFO  ] Generating answer file 
>>>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126145701.conf'
>>>>>>>>>>>> [ INFO  ] Stage: Pre-termination [ INFO  ] Stage: 
>>>>>>>>>>>> Termination
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> [root at he ovirt]# tail -f /var/log/vdsm/
>>>>>>>>>>>> backup/           connectivity.log  mom.log
>>>>>>>>>>>> supervdsm.log     vdsm.log
>>>>>>>>>>>> [root at he ovirt]# tail -f /var/log/vdsm/vdsm.log Detector 
>>>>>>>>>>>> thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:07,564::protocoldetector::247::vds.MultiProtocolAccep
>>>>>>>>>>>> tor::(_handle_connection_read) Detected protocol xml from 
>>>>>>>>>>>> 127.0.0.1:42741 Detector thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:07,564::BindingXMLRPC::1173::XmlDetector::(handleSock
>>>>>>>>>>>> et) xml over http detected from ('127.0.0.1', 42741) 
>>>>>>>>>>>> Detector thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:07,644::protocoldetector::187::vds.MultiProtocolAccep
>>>>>>>>>>>> tor::(_add_connection) Adding connection from 
>>>>>>>>>>>> 127.0.0.1:42742 Detector thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:08,088::protocoldetector::201::vds.MultiProtocolAccep
>>>>>>>>>>>> tor::(_remove_connection) Connection removed from 
>>>>>>>>>>>> 127.0.0.1:42742 Detector thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:08,088::protocoldetector::247::vds.MultiProtocolAccep
>>>>>>>>>>>> tor::(_handle_connection_read) Detected protocol xml from 
>>>>>>>>>>>> 127.0.0.1:42742 Detector thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:08,088::BindingXMLRPC::1173::XmlDetector::(handleSock
>>>>>>>>>>>> et) xml over http detected from ('127.0.0.1', 42742) 
>>>>>>>>>>>> Detector thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:08,171::protocoldetector::187::vds.MultiProtocolAccep
>>>>>>>>>>>> tor::(_add_connection) Adding connection from 
>>>>>>>>>>>> 127.0.0.1:42743 Detector thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:08,572::protocoldetector::201::vds.MultiProtocolAccep
>>>>>>>>>>>> tor::(_remove_connection) Connection removed from 
>>>>>>>>>>>> 127.0.0.1:42743 Detector thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:08,573::protocoldetector::247::vds.MultiProtocolAccep
>>>>>>>>>>>> tor::(_handle_connection_read) Detected protocol xml from 
>>>>>>>>>>>> 127.0.0.1:42743 Detector thread::DEBUG::2015-11-26
>>>>>>>>>>>> 14:57:08,573::BindingXMLRPC::1173::XmlDetector::(handleSock
>>>>>>>>>>>> et) xml over http detected from ('127.0.0.1', 42743)
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> It failed earlier; can you please attach the whole VDSM log?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Thu, Nov 26, 2015 at 2:01 PM, Simone Tiraboschi < 
>>>>>>>>>>>> stirabos at redhat.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Nov 26, 2015 at 7:30 AM, Budur Nagaraju < 
>>>>>>>>>>>>> nbudoor at gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> It's a fresh setup; I have deleted all the VMs and am
>>>>>>>>>>>>>> still facing the same issues.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>> Can you please paste the output of vdsClient -s 0 list?
>>>>>>>>>>>>> Thanks.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Thu, Nov 26, 2015 at 11:56 AM, Oved Ourfali < 
>>>>>>>>>>>>>> oourfali at redhat.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Hi
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Seems like you have existing VMs running on the host 
>>>>>>>>>>>>>>> (you can check that by looking for qemu processes on your host).
>>>>>>>>>>>>>>> Is that a clean deployment, or was the host used before 
>>>>>>>>>>>>>>> for running VMs?
>>>>>>>>>>>>>>> Perhaps you already ran the hosted engine setup, and the 
>>>>>>>>>>>>>>> VM was left there?
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> CC-ing Sandro who is more familiar in that than me.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>>> Oved
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Thu, Nov 26, 2015 at 7:07 AM, Budur Nagaraju < 
>>>>>>>>>>>>>>> nbudoor at gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> HI
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Getting below error while configuring Hosted engine,
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> root at he ~]# hosted-engine --deploy [ INFO  ] Stage: 
>>>>>>>>>>>>>>>> Initializing [ INFO  ] Generating a temporary VNC 
>>>>>>>>>>>>>>>> password.
>>>>>>>>>>>>>>>> [ INFO  ] Stage: Environment setup
>>>>>>>>>>>>>>>>           Continuing will configure this host for 
>>>>>>>>>>>>>>>> serving as hypervisor and create a VM where you have to 
>>>>>>>>>>>>>>>> install oVirt Engine afterwards.
>>>>>>>>>>>>>>>>           Are you sure you want to continue? (Yes,
>>>>>>>>>>>>>>>> No)[Yes]: yes
>>>>>>>>>>>>>>>>           Configuration files: []
>>>>>>>>>>>>>>>>           Log file:
>>>>>>>>>>>>>>>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20151126102302-bkozgk.log
>>>>>>>>>>>>>>>>           Version: otopi-1.3.2 (otopi-1.3.2-1.el6)
>>>>>>>>>>>>>>>>           It has been detected that this program is 
>>>>>>>>>>>>>>>> executed through an SSH connection without using screen.
>>>>>>>>>>>>>>>>           Continuing with the installation may lead to 
>>>>>>>>>>>>>>>> broken installation if the network connection fails.
>>>>>>>>>>>>>>>>           It is highly recommended to abort the 
>>>>>>>>>>>>>>>> installation and run it inside a screen session using command "screen".
>>>>>>>>>>>>>>>>           Do you want to continue anyway? (Yes, 
>>>>>>>>>>>>>>>> No)[No]: yes [WARNING] Cannot detect if hardware 
>>>>>>>>>>>>>>>> supports virtualization [ INFO  ] Bridge ovirtmgmt 
>>>>>>>>>>>>>>>> already created [ INFO  ] Stage: Environment packages 
>>>>>>>>>>>>>>>> setup [ INFO  ] Stage: Programs detection [ INFO  ] 
>>>>>>>>>>>>>>>> Stage: Environment setup
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> *[ ERROR ] The following VMs has been found:
>>>>>>>>>>>>>>>> 2b8d6d91-d838-44f6-ae3b-c92cda014280[ ERROR ] Failed to 
>>>>>>>>>>>>>>>> execute stage 'Environment setup': Cannot setup Hosted 
>>>>>>>>>>>>>>>> Engine with other VMs running* [ INFO  ] Stage: Clean 
>>>>>>>>>>>>>>>> up [ INFO  ] Generating answer file 
>>>>>>>>>>>>>>>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20151126102310.conf'
>>>>>>>>>>>>>>>> [ INFO  ] Stage: Pre-termination [ INFO  ] Stage: 
>>>>>>>>>>>>>>>> Termination [root at he ~]#
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> _______________________________________________
>>>>>>>>>>>>>>>> Users mailing list
>>>>>>>>>>>>>>>> Users at ovirt.org
>>>>>>>>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Sandro Bonazzola
>>>>>> Better technology. Faster innovation. Powered by community 
>>>>>> collaboration.
>>>>>> See how it works at redhat.com
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>
>

------------------------------

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


End of Users Digest, Vol 51, Issue 16
*************************************


