Error while executing action Setup Networks: Could not connect to peer host
by Roman Nikolayevich Drovalev
Hello,
Please help.
How do I add a host to an oVirt cluster? Adding the network interfaces fails.
There used to be a link to a page describing a workaround (creating an
ifcfg-ovirtmgmt file with a script), but that page is no longer accessible.
How can I add the host? When matching the host's interface against ovirtmgmt, I get
the error "Error while executing action Setup Networks: Could not connect to peer host".
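The workaround referred to above was, roughly, to create the ovirtmgmt bridge on the
host by hand before adding it to the engine. A minimal sketch, assuming an EL7 host with
a single NIC named eth0 and static addressing (all device names and addresses below are
examples, not values from this post):

# Create the ovirtmgmt bridge manually before adding the host.
cat > /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt <<'EOF'
DEVICE=ovirtmgmt
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.21
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes
DELAY=0
EOF
# Enslave the physical NIC to the bridge.
cat > /etc/sysconfig/network-scripts/ifcfg-eth0 <<'EOF'
DEVICE=eth0
BRIDGE=ovirtmgmt
ONBOOT=yes
EOF
systemctl restart network

With the bridge already in place, adding the host / running Setup Networks should no
longer need to reconfigure the management interface itself.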
Drovalev Roman
host-deploy on HE hosts failing: Job for vdsmd.service canceled.
by Daniel Helgenberger
Hello,
I have the following problem with host-deploy on HE hosts:
> 2015-06-17 13:22:32 DEBUG otopi.plugins.otopi.services.systemd plugin.execute:937 execute-output: ('/bin/systemctl', 'stop', 'vdsmd.service') stderr:
> Job for vdsmd.service canceled.
>
> 2015-06-17 13:22:32 DEBUG otopi.context context._executeMethod:152 method exception
> Traceback (most recent call last):
> File "/tmp/ovirt-1WPaDxJOpX/pythonlib/otopi/context.py", line 142, in _executeMethod
> method['method']()
> File "/tmp/ovirt-1WPaDxJOpX/otopi-plugins/ovirt-host-deploy/vdsm/packages.py", line 106, in _packages
> self.services.state('vdsmd', False)
> File "/tmp/ovirt-1WPaDxJOpX/otopi-plugins/otopi/services/systemd.py", line 138, in state
> 'start' if state else 'stop'
> File "/tmp/ovirt-1WPaDxJOpX/otopi-plugins/otopi/services/systemd.py", line 77, in _executeServiceCommand
> raiseOnError=raiseOnError
> File "/tmp/ovirt-1WPaDxJOpX/pythonlib/otopi/plugin.py", line 942, in execute
> command=args[0],
> RuntimeError: Command '/bin/systemctl' failed to execute
> 2015-06-17 13:22:32 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Package installation': Command '/bin/systemctl' failed to execute
> 2015-06-17 13:22:32 DEBUG otopi.transaction transaction.abort:131 aborting 'Yum Transaction'
> 2015-06-17 13:22:32 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:92 Yum Performing yum transaction rollback
> Loaded plugins: auto-update-debuginfo, fastestmirror
I do not have any 'standard' (non-HA) hosts, so I cannot tell whether they are
affected, but I suspect the problem is related to HE.
Steps:
1. Set the host to maintenance.
At this point, only the HE datacenter is still mounted:
mount |grep nfs
nexstor01.sec.int.m-box.de:/volumes/ovirt/engine on
/rhev/data-center/mnt/nexstor01.sec.int.m-box.de:_volumes_ovirt_engine
type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.11.0.30,mountvers=3,mountport=58144,mountproto=udp,local_lock=none,addr=10.11.0.30)
2. Reinstall host
-> Error, host deactivated (Job for vdsmd.service canceled.)
3. A workaround is to shut down sanlock (a consolidated sketch follows after step 4):
# systemctl stop vdsmd
Job for vdsmd.service canceled.
# sanlock shutdown
# systemctl stop vdsmd
4. Now, host deploy works as expected.
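Put together, the workaround amounts to the following on the affected host while it is
in maintenance (a sketch only; 'sanlock client status' is just one way to confirm that
sanlock is still holding the hosted-engine lockspace):

systemctl stop vdsmd || true   # first attempt may fail with "Job for vdsmd.service canceled."
sanlock client status          # list the lockspaces/leases sanlock still holds
sanlock shutdown               # ask sanlock to release them and exit
systemctl stop vdsmd           # now succeeds, and host-deploy / "Reinstall" can proceed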
Versions:
EL7 centos
ovirt-3.5.2 -3.5.3
sanlock-3.2.2-2.el7.x86_64
vdsm 4.14.14 - 4.14.20
--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767
Automating oVirt Windows Guest Tools installations
by Lev Veyde
Hi Patrick,
First of all, let's clear up a misunderstanding: you don't need to install
Python manually.
The installation of oVirt WGT is fully self-contained, and while the oVirt
Guest Agent it includes is indeed written in Python, the bundled copy is
converted with py2exe (check py2exe.org for more details if it interests you)
into a standalone executable (well, almost: just like the Windows build of
Python.exe, it depends on the Microsoft Visual C++ runtime, but we install
that during the installation of the oVirt WGT).
Now, about the automated installation: we generally support silent
installation of oVirt WGT.
You just need to pass the /S parameter to the installer.
However, there is a catch: unfortunately, Windows will pop up warning
messages because the supplied drivers are not WHQL'd; they are signed by
Red Hat, Inc. rather than with a Microsoft certificate.
This is a security feature of Windows OS itself, and there is not much we
can do about it right now.
The side effect is that you need to manually approve the driver installation
for each driver, or choose to trust all drivers from Red Hat, Inc., after
which no more popups will show up. Unfortunately, you still need to do this
manually at least once, and you can't pre-approve Red Hat, Inc. to make the
process fully automated. For more information on installing oVirt WGT you can
check this article:
http://community.redhat.com/blog/2015/05/how-to-install-and-use-ovirts-wi...
by yours truly.
There is a workaround, though: create a program that automatically approves
those unsigned-driver dialogs. This is relatively easy to do with, for
example, the AutoIt scripting engine (see
https://www.autoitscript.com/site/autoit/ ), which is free (as in free beer,
but unfortunately not as in freedom, because its source code is not
supplied). Note that you must be quite careful with that, as by doing so you
are basically disabling a security mechanism that Microsoft put in place for
a reason, and you may unintentionally install other non-WHQL'd drivers if an
installation attempt for them is made while your auto-approver program is
running.
Thanks in advance,
Lev Veyde.
Re: [ovirt-users] [Centos7.1x64] [Ovirt 3.5.2] SOLVED: Test fence : Power management test failed for Host hosted_engine1 Done
by wodel youchi
Hi, and thanks for your replies.
The problem was the *quotation marks* in the fence options: lanplus="1" and
power_wait="60" were misinterpreted; the correct form is lanplus=1,
power_wait=60.
After changing that, power management was configured and the test passed
with: success, on.
Thanks.
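For reference, a minimal sketch of the difference (address, user and option names are
taken from the logs quoted below; the password is a placeholder). As the VDSM log shows,
the engine hands the PM options to fence_ipmilan as key=value pairs on stdin, and quoted
values are passed through literally, which appears to be why the agent fails:

fence_ipmilan <<'EOF'    # works: unquoted option values
ipaddr=192.168.2.2
login=Administrator
passwd=CHANGE_ME
action=status
lanplus=1
power_wait=60
EOF
# With lanplus="1" / power_wait="60" the same call returns rc=1 and
# "Failed: Unable to obtain correct plug status or plug is not available".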
2015-06-11 12:11 GMT+01:00 Martin Perina <mperina(a)redhat.com>:
> Hi,
>
> I have an HP DL160 G6 with iLO 2.0 and firmware 4.23, but I really
> don't have an idea what your issue with power management is right now.
> Because oVirt just executes fence_ipmi on selected proxy host with
> parameters you could see in your vdsm log. And those parameters are
> identical to your command line options that you used when testing it
> with fence_ipmi directly (which worked fine). Very strange :-(
>
> Also, I can't see any difference from the oVirt power management code's point
> of view between CentOS 6 and 7. Just be warned that in oVirt 3.6 not
> all features will be supported on CentOS 6, so using CentOS 7 is IMO
> a better option for now.
>
> But anyway, could you please file a bug for oVirt 3.5.2 with a description
> of your issue and your logs attached? We will try to reproduce this issue,
> but so far I haven't been able to reproduce it.
>
> About your issue with the HP watchdog, I cannot give any specific advice, just
> that you can try to solve it by updating the BIOS/firmware to the latest
> version and/or contacting HP support.
>
> Thanks a lot
>
> Martin Perina
>
>
> ----- Original Message -----
> > From: "wodel youchi" <wodel.youchi(a)gmail.com>
> > To: "Martin Perina" <mperina(a)redhat.com>
> > Cc: "users" <Users(a)ovirt.org>, "Eli Mesika" <emesika(a)redhat.com>
> > Sent: Thursday, June 11, 2015 12:56:46 PM
> > Subject: Re: [ovirt-users] [Centos7.1x64] [Ovirt 3.5.2] Test fence :
> Power management test failed for Host
> > hosted_engine1 Done
> >
> > Hi Martin,
> >
> > Could you please tell me the version of the ILO firmware you are using?
> >
> > I did upgrade mine from 1.40 to 2.10 but nothing changed, I did also
> > upgrade the smart array p420i card from 5.10 to 6.34 without luck so far.
> >
> > I checked again all parameters, I can't find the error.
> >
> > I did all the updates for Centos and oVirt
> >
> > I have another problem when rebooting any hypervisor, the hypervisor
> hangs,
> > the problem is with hpwtd (hp watch dog)
> > "hpwdt unexpected close not stopping watchdog"
> >
> > I added this to kernel parameters "intremap=no_x2apic_optout" but it
> didn't
> > change any thing.
> >
> > I am thinking to test with the latest kernel available to see if it's a
> > kernel problem.
> >
> > and I am going to reinstall the platform with Centos 6 to see if there
> will
> > be any differences.
> >
> >
> >
> >
> > 2015-06-10 12:00 GMT+01:00 wodel youchi <wodel.youchi(a)gmail.com>:
> >
> > > Hi,
> > >
> > > engine log is already in debug mode
> > >
> > > here it is:
> > > 2015-06-10 11:48:23,653 INFO
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
> > > Event ID: -1, Message: Host hosted_engine_2 from cluster Default was
> chosen
> > > as a proxy to execute Status command on Host hosted_engine_1.
> > > 2015-06-10 11:48:23,653 INFO [org.ovirt.engine.core.bll.FenceExecutor]
> > > (ajp--127.0.0.1-8702-12) Using Host hosted_engine_2 from cluster
> Default as
> > > proxy to execute Status command on Host
> > > 2015-06-10 11:48:23,673 INFO [org.ovirt.engine.core.bll.FenceExecutor]
> > > (ajp--127.0.0.1-8702-12) Executing <Status> Power Management command,
> Proxy
> > > Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management
> > > IP:192.168.2.2, User:Administrator, Options:
> power_wait="60",lanplus="1",
> > > Fencing policy:null
> > > 2015-06-10 11:48:23,703 INFO
> > > *[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > > (ajp--127.0.0.1-8702-12) START, FenceVdsVDSCommand(HostName =
> > > hosted_engine_2, HostId = 0192d1ac-b905-4660-b149-4bef578985dd,
> targetVdsId
> > > = cf2d1260-7bb3-451a-9cd7-80e6a0ede52a, action = Status, ip =
> 192.168.2.2,
> > > port = , type = ipmilan, user = Administrator, password = ******,
> options =
> > > ' power_wait="60",lanplus="1"', policy = 'null'), log id:
> > > 2bda01bd2015-06-10 11:48:23,892 WARN
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
> > > Event ID: -1, Message: Power Management test failed for Host
> > > hosted_engine_1.Done*
> > > 2015-06-10 11:48:23,892 INFO
> > > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > > (ajp--127.0.0.1-8702-12) FINISH, FenceVdsVDSCommand, return:
> > > *Test Succeeded, unknown, log id: 2bda01bd2015-06-10 11:48:23,897 WARN
> > > [org.ovirt.engine.core.bll.FenceExecutor] (ajp--127.0.0.1-8702-12)
> Fencing
> > > operation failed with proxy host 0192d1ac-b905-4660-*b149-4bef578985dd,
> > > trying another proxy...
> > > 2015-06-10 11:48:24,039 ERROR [org.ovirt.engine.core.bll.FenceExecutor]
> > > (ajp--127.0.0.1-8702-12) Failed to run Power Management command on
> Host ,
> > > no running proxy Host was found.
> > > 2015-06-10 11:48:24,039 WARN [org.ovirt.engine.core.bll.FenceExecutor]
> > > (ajp--127.0.0.1-8702-12) Failed to find other proxy to re-run failed
> fence
> > > operation, retrying with the same proxy...
> > > 2015-06-10 11:48:24,143 INFO
> > > *[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
> > > Event ID: -1, Message: Host hosted_engine_2 from cluster Default was
> chosen
> > > as a proxy to execute Status command on Host hosted_engine_1.*
> > > 2015-06-10 11:48:24,143 INFO [org.ovirt.engine.core.bll.FenceExecutor]
> > > (ajp--127.0.0.1-8702-12) Using Host hosted_engine_2 from cluster
> Default as
> > > proxy to execute Status command on Host
> > >
> > > *2015-06-10 11:48:24,148 INFO
> [org.ovirt.engine.core.bll.FenceExecutor]
> > > (ajp--127.0.0.1-8702-12) Executing <Status> Power Management command,
> Proxy
> > > Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management
> > > IP:192.168.2.2, User:Administrator, Options:
> power_wait="60",lanplus="1",
> > > Fencing policy:null2015-06-10 11:48:24,165 INFO
> > > *[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > > (ajp--127.0.0.1-8702-12) START, FenceVdsVDSCommand(HostName =
> > > hosted_engine_2, HostId = 0192d1ac-b905-4660-b149-4bef578985dd,
> targetVdsId
> > > = cf2d1260-7bb3-451a-9cd7-80e6a0ede52a, action = Status, ip =
> 192.168.2.2,
> > > port = , type = ipmilan, user = Administrator, password = ******,
> options =
> > > ' power_wait="60",lanplus="1"', policy = 'null'), log id: 7e7f2726
> > > 2015-06-10 11:48:24,360 WARN
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (ajp--127.0.0.1-8702-12) Correlation ID: null, Call Stack: null, Custom
> > > Event ID: -1, Message: Power Management test failed for Host
> > > hosted_engine_1.Done
> > > 2015-06-10 11:48:24,360 INFO
> > > *[org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > > (ajp--127.0.0.1-8702-12) FINISH, FenceVdsVDSCommand, return: Test
> > > Succeeded, unknown, log id: 7e7f2726*
> > >
> > >
> > > VDSM log from hosted_engine_2
> > >
> > > JsonRpcServer::DEBUG::2015-06-10
> > > 11:48:23,640::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> > > Waiting for request
> > > Thread-2201::DEBUG::2015-06-10
> 11:48:23,642::API::1209::vds::(fenceNode)
> > >
> *fenceNode(addr=192.168.2.2,port=,agent=ipmilan,user=Administrator,passwd=XXXX,action=status,secure=False,options=
> > > power_wait="60"lanplus="1",policy=None)*
> > > Thread-2201::DEBUG::2015-06-10
> 11:48:23,642::utils::739::root::(execCmd)
> > > /usr/sbin/fence_ipmilan (cwd None)
> > > Thread-2201::DEBUG::2015-06-10
> 11:48:23,709::utils::759::root::(execCmd)
> > > *FAILED:
> > > <err> = 'Failed: Unable to obtain correct plug status or plug is not
> > > available\n\n\n*'; <rc> = 1
> > > Thread-2201::DEBUG::2015-06-10 11:48:23,710::API::1164::vds::(fence)
> rc 1
> > > inp agent=fence_ipmilan
> > > ipaddr=192.168.2.2
> > > login=Administrator
> > > action=status
> > > passwd=XXXX
> > > power_wait="60"
> > > lanplus="1" out [] err ['Failed: Unable to obtain correct plug status
> or
> > > plug is not available', '', '']
> > > Thread-2201::DEBUG::2015-06-10
> 11:48:23,710::API::1235::vds::(fenceNode)
> > > rc 1 in agent=fence_ipmilan
> > > ipaddr=192.168.2.2
> > > login=Administrator
> > > action=status
> > > passwd=XXXX
> > > power_wait="60"
> > > lanplus="1" out [] err ['Failed: Unable to obtain correct plug status
> or
> > > plug is not available', '', '']
> > > Thread-2201::DEBUG::2015-06-10
> > > 11:48:23,710::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> > > response
> > > JsonRpc (StompReactor)::DEBUG::2015-06-10
> > > 11:48:23,712::stompReactor::98::Broker.StompAdapter::(handle_frame)
> > > Handling message <StompFrame command='SEND'>
> > > JsonRpcServer::DEBUG::2015-06-10
> > > 11:48:23,713::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> > > Waiting for request
> > > Thread-2202::DEBUG::2015-06-10
> 11:48:23,715::API::1209::vds::(fenceNode)
> > >
> fenceNode(addr=192.168.2.2,port=,agent=ipmilan,user=Administrator,passwd=XXXX,action=status,secure=False,options=
> > > power_wait="60"
> > > lanplus="1",policy=None)
> > > Thread-2202::DEBUG::2015-06-10
> 11:48:23,715::utils::739::root::(execCmd)
> > > /usr/sbin/fence_ipmilan (cwd None)
> > > Thread-2202::DEBUG::2015-06-10
> 11:48:23,781::utils::759::root::(execCmd)
> > > FAILED: <err> = 'Failed: Unable to obtain correct plug status or plug
> is
> > > not available\n\n\n'; <rc> = 1
> > > Thread-2202::DEBUG::2015-06-10 11:48:23,781::API::1164::vds::(fence)
> rc 1
> > > inp agent=fence_ipmilan
> > > ipaddr=192.168.2.2
> > > login=Administrator
> > > action=status
> > > passwd=XXXX
> > > power_wait="60"
> > > lanplus="1" out [] err ['Failed: Unable to obtain correct plug status
> or
> > > plug is not available', '', '']
> > >
> > >
> > >
> > > I triple checked, I used the correct IPs and login password, the test
> in
> > > console works.
> > >
> > > 2015-06-10 10:31 GMT+01:00 Martin Perina <mperina(a)redhat.com>:
> > >
> > >> Hi,
> > >>
> > >> I just install engine 3.5.2 on Centos 7.1, added 2 Centos 7.1 hosts
> (both
> > >> with ipmilan fence devices) and everything worked fine. I also tried
> to
> > >> add
> > >> options
> > >>
> > >> lanplus="1", power_wait="60"
> > >>
> > >> and even with them getting power status of hosts worked fine.
> > >>
> > >> So could you please check again settings of your hosts in webadmin?
> > >>
> > >> hosted_engine1
> > >> PM address: IP address of ILO4 interface of the host hosted_engine1
> > >>
> > >>
> > >> hosted_engine2
> > >> PM address: IP address of ILO4 interface of the host hosted_engine2
> > >>
> > >> If the IP addresses are entered correctly, please allow DEBUG log for
> > >> engine,
> > >> execute test of PM settings for one host and attach logs from engine
> and
> > >> VDSM logs from both hosts.
> > >>
> > >> Thanks
> > >>
> > >> Martin Perina
> > >>
> > >>
> > >> ----- Original Message -----
> > >> > From: "wodel youchi" <wodel.youchi(a)gmail.com>
> > >> > To: "users" <Users(a)ovirt.org>
> > >> > Sent: Tuesday, June 9, 2015 2:41:02 PM
> > >> > Subject: [ovirt-users] [Centos7.1x64] [Ovirt 3.5.2] Test fence :
> Power
> > >> management test failed for Host hosted_engine1
> > >> > Done
> > >> >
> > >> > Hi,
> > >> >
> > >> > I have a weird problem with fencing
> > >> >
> > >> > I have a cluster of two HP DL380p G8 (ILO4)
> > >> >
> > >> > Centos7.1x64 and oVirt 3.5.2 ALL UPDATED
> > >> >
> > >> > I configured fencing first with ilo4 then ipmilan
> > >> >
> > >> > When testing fence from the engine I get : Succeeded, Unknown
> > >> >
> > >> > And in alerts tab I get : Power management test failed for Host
> > >> > hosted_engine1 Done (the same for host2)
> > >> >
> > >> > I tested with fence_ilo4 and fence_ipmilan and they report the
> result
> > >> > correctly
> > >> >
> > >> > # fence_ipmilan -P -a 192.168.2.2 -o status -l Administrator -p
> ertyuiop
> > >> > -vExecuting: /usr/bin/ipmitool -I lanplus -H 192.168.2.2 -U
> > >> Administrator -P
> > >> > ertyuiop -p 623 -L ADMINISTRATOR chassis power status
> > >> >
> > >> > 0 Chassis Power is on
> > >> >
> > >> >
> > >> > Status: ON
> > >> >
> > >> >
> > >> > # fence_ilo4 -l Administrator -p ertyuiop -a 192.168.2.2 -o status
> -v
> > >> > Executing: /usr/bin/ipmitool -I lanplus -H 192.168.2.2 -U
> Administrator
> > >> -P
> > >> > ertyuiop -p 623 -L ADMINISTRATOR chassis power status
> > >> >
> > >> > 0 Chassis Power is on
> > >> >
> > >> >
> > >> > Status: ON
> > >> >
> > >> > ----------------------------------
> > >> > These are the options passed to fence_ipmilan (I tested with the
> > >> options and
> > >> > without them)
> > >> >
> > >> > lanplus="1", power_wait="60"
> > >> >
> > >> >
> > >> > This is the engine log:
> > >> >
> > >> > 2015-06-09 13:35:29,287 INFO
> [org.ovirt.engine.core.bll.FenceExecutor]
> > >> > (ajp--127.0.0.1-8702-7) Using Host hosted_engine_2 from cluster
> Default
> > >> as
> > >> > proxy to execute Status command on Host
> > >> > 2015-06-09 13:35:29,289 INFO
> [org.ovirt.engine.core.bll.FenceExecutor]
> > >> > (ajp--127.0.0.1-8702-7) Executing <Status> Power Management command,
> > >> Proxy
> > >> > Host:hosted_engine_2, Agent:ipmilan, Target Host:, Management
> > >> > IP:192.168.2.2, User:Administrator, Options:
> > >> power_wait="60",lanplus="1",
> > >> > Fencing policy:null
> > >> > 2015-06-09 13:35:29,306 INFO
> > >> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > >> > (ajp--127.0.0.1-8702-7) START, FenceVdsVDSCommand(
> > >> > HostName = hosted_engine_2,
> > >> > HostId = 0192d1ac-b905-4660-b149-4bef578985dd,
> > >> > targetVdsId = cf2d1260-7bb3-451a-9cd7-80e6a0ede52a,
> > >> > action = Status,
> > >> > ip = 192.168.2.2,
> > >> > port = ,
> > >> > type = ipmilan,
> > >> > user = Administrator,
> > >> > password = ******,
> > >> > options = ' power_wait="60",lanplus="1"',
> > >> > policy = 'null'), log id: 24ce6206
> > >> > 2015-06-09 13:35:29,516 WARN
> > >> >
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > >> > (ajp--127.0.0.1-8702-7) Correlation ID: null, Call Stack: null,
> Custom
> > >> Event
> > >> > ID: -1, Message: Power Management test failed for Host
> > >> hosted_engine_1.Done
> > >> > 2015-06-09 13:35:29,516 INFO
> > >> > [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand]
> > >> > (ajp--127.0.0.1-8702-7) FINISH, FenceVdsVDSCommand, return: Test
> > >> Succeeded,
> > >> > unknown , log id: 24ce6206
> > >> >
> > >> >
> > >> > and here the vdsm log from the proxy
> > >> >
> > >> > JsonRpcServer::DEBUG::2015-06-09
> > >> > 13:37:52,461::__init__::506::jsonrpc.JsonRpcServer::(serve_requests)
> > >> Waiting
> > >> > for request
> > >> > Thread-131907::DEBUG::2015-06-09
> > >> 13:37:52,463::API::1209::vds::(fenceNode)
> > >> >
> > >>
> fenceNode(addr=192.168.2.2,port=,agent=ipmilan,user=Administrator,passwd=XXXX,action=status,secure=False,options=
> > >> > power_wait="60"
> > >> > lanplus="1",policy=None)
> > >> > Thread-131907::DEBUG::2015-06-09
> > >> 13:37:52,463::utils::739::root::(execCmd)
> > >> > /usr/sbin/fence_ipmilan (cwd None)
> > >> > Thread-131907::DEBUG::2015-06-09
> > >> 13:37:52,533::utils::759::root::(execCmd)
> > >> > FAILED: <err> = 'Failed: Unable to obtain correct plug status or
> plug
> > >> is not
> > >> > available\n\n\n'; <rc> = 1
> > >> > Thread-131907::DEBUG::2015-06-09
> 13:37:52,533::API::1164::vds::(fence)
> > >> rc 1
> > >> > inp agent=fence_ipmilan
> > >> > ipaddr=192.168.2.2
> > >> > login=Administrator
> > >> > action=status
> > >> > passwd=XXXX
> > >> > power_wait="60"
> > >> > lanplus="1" out [] err ['Failed: Unable to obtain correct plug
> status
> > >> or plug
> > >> > is not available', '', '']
> > >> > Thread-131907::DEBUG::2015-06-09
> > >> 13:37:52,533::API::1235::vds::(fenceNode) rc
> > >> > 1 in agent=fence_ipmilan
> > >> > ipaddr=192.168.2.2
> > >> > login=Administrator
> > >> > action=status
> > >> > passwd=XXXX
> > >> > power_wait="60"
> > >> > lanplus="1" out [] err [' Failed: Unable to obtain correct plug
> status
> > >> or
> > >> > plug is not available ', '', '']
> > >> > Thread-131907::DEBUG::2015-06-09
> > >> > 13:37:52,534::stompReactor::163::yajsonrpc.StompServer::(send)
> Sending
> > >> > response
> > >> > Detector thread::DEBUG::2015-06-09
> > >> >
> > >>
> 13:37:53,670::protocoldetector::187::vds.MultiProtocolAcceptor::(_add_connection)
> > >> > Adding connection from 127.0.0.1:55761
> > >> >
> > >> >
> > >> > VDSM rpms
> > >> > # rpm -qa | grep vdsm
> > >> > vdsm-cli-4.16.14-0.el7.noarch
> > >> > vdsm-python-zombiereaper-4.16.14-0.el7.noarch
> > >> > vdsm-xmlrpc-4.16.14-0.el7.noarch
> > >> > vdsm-yajsonrpc-4.16.14-0.el7.noarch
> > >> > vdsm-4.16.14-0.el7.x86_64
> > >> > vdsm-python-4.16.14-0.el7.noarch
> > >> > vdsm-jsonrpc-4.16.14-0.el7.noarch
> > >> >
> > >> > any idea?
> > >> >
> > >> > Thanks in advance.
> > >> >
> > >> > _______________________________________________
> > >> > Users mailing list
> > >> > Users(a)ovirt.org
> > >> > http://lists.ovirt.org/mailman/listinfo/users
> > >> >
> > >>
> > >
> > >
> >
>
storage domain don't go active anymore after 3.5.3 update
by Nathanaël Blanchet
Hello, since the update to 3.5.3, my master data domain fails to activate.
I don't know what to do, and it is critical for production.
VMs are still up, but I can't interact with them anymore.
If I shut down any of them, I won't be able to recover them.
My datacenter is an FC one, and all of my hosts have been upgraded to
vdsm 4.16.20. I restarted vdsmd and the engine, but nothing has changed.
Thank you for your help.
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Need Advice on Hosted Engine installation
by wodel youchi
Hi,
We are installing a new oVirt platform for a client. This is our first
deployment.
We chose the Hosted Engine install.
Here is the hardware and the network configuration we used (be patient :)).
We have two hypervisors for now.
Each server has 8 NIC ports; we will use bonding.
A NAS with NFSv4 for the engine VM and for the ISO and export domains.
An iSCSI SAN for the rest of the VMs (data domain).
At first we chose the following network configuration:
10.0.0.x/16 as the DMZ (ovirtmgmt will use this network)
172.16.1.0/24 as the storage network; we've configured our NFS server to export only on
this network.
-----------------
We didn't configure the bonding manually at first; we thought we could
do it once the engine was up... we couldn't :-(
For ovirtmgmt, if we create the bond after the engine's installation, we
cannot modify the configuration; the error message is "network in use".
For the storage network, we couldn't attach this logical network to the NIC
used for storage, because when we do that we lose the connection to the NFS
share serving the engine VM...
So we had to export the engine VM's NFS share on the DMZ, and we had to
configure bonding before starting the hosted engine installation.
We also configured bonding on the storage network, but after the engine
installation we couldn't attach this bond to the Storage logical network;
the error message is "you have to specify network address if static
protocol is used", even though the IP address is specified (it's the bond's
IP address)... :-(
---------------
Questions:
- Did we miss anything about the network configuration capabilities of oVirt, or
is the hosted engine really a special case?
- What is the best way to configure oVirt's Hosted Engine storage?
- For the rest of the network, do we have to configure bonding and bridging
from the GUI only (not manually)?
Thanks in advance.
Bug in hostdeploy / baseurl - RepoError: Cannot find a valid baseurl for repo: base/7/x86_64
by Daniel Helgenberger
Hello,
I just went ahead and filed [BZ1232714] because with oVirt 3.5.3 host
deploy seems to fail on CentOS 7 if there is no baseurl setting in the yum repos:
RepoError: Cannot find a valid baseurl for repo: base/7/x86_64
[BZ1232714] https://bugzilla.redhat.com/show_bug.cgi?id=1232714
-8<---------
> 2015-06-17 12:27:37 DEBUG otopi.transaction transaction._prepare:77 preparing 'Yum Transaction'
> Loaded plugins: fastestmirror
> 2015-06-17 12:27:37 DEBUG otopi.context context._executeMethod:138 Stage internal_packages METHOD otopi.plugins.otopi.network.hostname.Plugin._internal_packages
> 2015-06-17 12:27:37 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:88 Yum queue package iproute for install
> Loading mirror speeds from cached hostfile
> 2015-06-17 12:27:37 ERROR otopi.plugins.otopi.packagers.yumpackager yumpackager.error:97 Yum Cannot queue package iproute: Cannot find a valid baseurl for repo: base/7/x86_64
> 2015-06-17 12:27:37 DEBUG otopi.context context._executeMethod:152 method exception
> Traceback (most recent call last):
> File "/tmp/ovirt-s3ofZ9o6Pq/pythonlib/otopi/context.py", line 142, in _executeMethod
> method['method']()
> File "/tmp/ovirt-s3ofZ9o6Pq/otopi-plugins/otopi/network/hostname.py", line 66, in _internal_packages
> self.packager.install(packages=('iproute',))
> File "/tmp/ovirt-s3ofZ9o6Pq/otopi-plugins/otopi/packagers/yumpackager.py", line 303, in install
> ignoreErrors=ignoreErrors
> File "/tmp/ovirt-s3ofZ9o6Pq/pythonlib/otopi/miniyum.py", line 865, in install
> **kwargs
> File "/tmp/ovirt-s3ofZ9o6Pq/pythonlib/otopi/miniyum.py", line 509, in _queue
> provides = self._queryProvides(packages=(package,))
> File "/tmp/ovirt-s3ofZ9o6Pq/pythonlib/otopi/miniyum.py", line 447, in _queryProvides
> for po in self._yb.searchPackageProvides(args=packages):
> File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 3406, in searchPackageProvides
> where = self.returnPackagesByDep(arg)
> File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 4232, in returnPackagesByDep
> return self.pkgSack.searchProvides(depstring)
> File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 1079, in <lambda>
> pkgSack = property(fget=lambda self: self._getSacks(),
> File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 784, in _getSacks
> self.repos.populateSack(which=repos)
> File "/usr/lib/python2.7/site-packages/yum/repos.py", line 344, in populateSack
> self.doSetup()
> File "/usr/lib/python2.7/site-packages/yum/repos.py", line 158, in doSetup
> self.ayum.plugins.run('postreposetup')
> File "/usr/lib/python2.7/site-packages/yum/plugins.py", line 188, in run
> func(conduitcls(self, self.base, conf, **kwargs))
> File "/usr/lib/yum-plugins/fastestmirror.py", line 197, in postreposetup_hook
> if downgrade_ftp and _len_non_ftp(repo.urls) == 1:
> File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 871, in <lambda>
> urls = property(fget=lambda self: self._geturls(),
> File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 868, in _geturls
> self._baseurlSetup()
> File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 834, in _baseurlSetup
> self.check()
> File "/usr/lib/python2.7/site-packages/yum/yumRepo.py", line 554, in check
> 'Cannot find a valid baseurl for repo: %s' % self.ui_id
> RepoError: Cannot find a valid baseurl for repo: base/7/x86_64
> 2015-06-17 12:27:37 ERROR otopi.context context._executeMethod:161 Failed to execute stage 'Environment packages setup': Cannot find a valid baseurl for repo: base/7/x86_64
> 2015-06-17 12:27:37 DEBUG otopi.transaction transaction.abort:131 aborting 'Yum Transaction'
> 2015-06-17 12:27:37 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:92 Yum Performing yum transaction rollback
> Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock error was
> 14: curl#7 - "Failed connect to mirrorlist.centos.org:80; No route to host"
> Loaded plugins: fastestmirror
> 2015-06-17 12:27:37 DEBUG otopi.context context.dumpEnvironment:490 ENVIRONMENT DUMP - BEGIN
> 2015-06-17 12:27:37 DEBUG otopi.context context.dumpEnvironment:500 ENV BASE/error=bool:'True'
> 2015-06-17 12:27:37 DEBUG otopi.context context.dumpEnvironment:500 ENV BASE/exceptionInfo=list:'[(<class 'yum.Errors.RepoError'>, RepoError(), <traceback object at 0x2e26200>)]'
> 2015-06-17 12:27:37 DEBUG otopi.context context.dumpEnvironment:504 ENVIRONMENT DUMP - END
> 2015-06-17 12:27:37 INFO otopi.context context.runSequence:417 Stage: Pre-termination
-8<-----
--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767
problems with power management using idrac7 on r620
by Jason Keltz
Hi.
I've been having problem with power management using iDRAC 7 EXPRESS on
a Dell R620. This uses a shared LOM as opposed to Enterprise that has a
dedicated one. Every now and then, idrac simply stops responding to
ping, so it can't respond to status commands from the proxy. If I send
a reboot with "ipmitool mc reset cold" command, the idrac reboots and
comes back, but after the problem has occurred, even after a reboot, it
responds to ping, but drops 80+% of packets. The only way I can "solve"
the problem is to physically restart the server. This isn't just
happening on one R620; it's happening on all of my oVirt hosts. I
highly suspect it has to do with a memory leak, and that being monitored by
the engine triggers the problem. I had applied a recent firmware upgrade
that was supposed to "solve" this kind of problem, but it doesn't. In
order to provide Dell with more details, can someone tell me how often
each host is being queried for status? I can't seem to find that info.
The idrac on my file server doesn't seem to exhibit the same problem,
and I suspect that is because it isn't being queried.
Thanks,
Jason.
windows guest tools install
by Patrick Russell
Hi all,
We’ve got a large migration in progress for a Windows (2k3, 2k8, and 2k12) environment from VMware to oVirt. Does anyone have any suggestions for an unattended ovirt-tools install? Our Windows team has pretty much shot down installing Python on their VMs. Are there any flags we can pass to the installer to just accept the defaults? Any other suggestions?
Thanks,
Patrick
Postponing 3.6.0 second alpha to next week
by Sandro Bonazzola
Hi,
due to the instability of the current code, we need to postpone the second alpha to next week.
Maintainers:
- please ensure that all packages you maintain build in Jenkins
- please ensure that all the dependencies required by your packages are available either in the oVirt repository or in an external repository included in the
ovirt-release repo files.
- please ensure that the engine is at least able to add a couple of hosts, create a VM on one host and migrate it to the other one.
Infra:
- please help keep Jenkins monitored and stabilize it.
Community:
- please help test the nightly build while we stabilize it for the second alpha release.
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com