Cannot start VM - pauses due to storage error
by Brent Hartzell
Hello,
All of a sudden I started getting the errors below. I can't start VMs; they
immediately go into a paused state.
Thread-51856::DEBUG::2014-12-27
09:56:08,812::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set
Thread-51858::DEBUG::2014-12-27
09:56:11,953::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set
Thread-51860::DEBUG::2014-12-27
09:56:12,856::__init__::467::jsonrpc.JsonRpcServer::(_serveRequest) Calling
'VM.cont' in bridge with {u'vmID': u'e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d'}
libvirtEventLoop::DEBUG::2014-12-27
09:56:13,016::vm::5461::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::event Resumed detail 0 opaque
None
libvirtEventLoop::DEBUG::2014-12-27
09:56:13,021::vm::5461::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::event Resumed detail 0 opaque
None
libvirtEventLoop::INFO::2014-12-27
09:56:13,026::vm::4780::vm.Vm::(_onIOError)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::abnormal vm stop device
virtio-disk0 error eother
libvirtEventLoop::DEBUG::2014-12-27
09:56:13,029::vm::5461::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::event Suspended detail 2 opaque
None
Thread-51863::DEBUG::2014-12-27
09:56:15,072::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set
Thread-51869::DEBUG::2014-12-27
09:56:18,120::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set
Thread-51872::DEBUG::2014-12-27
09:56:21,154::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set
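The key line in this excerpt is the `_onIOError` event (`abnormal vm stop device virtio-disk0 error eother`); the surrounding "Domain Metadata is not set" DEBUG lines are noise. A quick way to isolate the I/O-error events is to grep for that handler name, shown here as a sketch against a two-line sample; on a real host the file would be /var/log/vdsm/vdsm.log:

```shell
# Write two sample lines from the excerpt above to a temp file, then
# count the I/O-error events among the DEBUG noise.
log=$(mktemp)
cat > "$log" <<'EOF'
libvirtEventLoop::INFO::2014-12-27 09:56:13,026::vm::4780::vm.Vm::(_onIOError) vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::abnormal vm stop device virtio-disk0 error eother
Thread-51856::DEBUG::2014-12-27 09:56:08,812::vm::486::vm.Vm::(_getUserCpuTuneInfo) vmId=`e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d`::Domain Metadata is not set
EOF
grep -c '_onIOError' "$log"
```

With the sample above this prints 1, the single abnormal-stop event.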
1. Re: ??: bond mode balance-alb (Jorick Astrego)
by Nikolai Sednev
Mode 2 will do the job best for you when the switch supports only static LAG. I'd advise using xmit_hash_policy layer3+4, so you'll get better traffic distribution for your DC.
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Tuesday, December 30, 2014 2:12:58 AM
Subject: Users Digest, Vol 39, Issue 173
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: ??: bond mode balance-alb (Jorick Astrego)
2. Re: ??: bond mode balance-alb (Jorick Astrego)
3. HostedEngine Deployment Woes (Mikola Rose)
----------------------------------------------------------------------
Message: 1
Date: Mon, 29 Dec 2014 20:13:40 +0100
From: Jorick Astrego <j.astrego(a)netbulae.eu>
To: users(a)ovirt.org
Subject: Re: [ovirt-users] ??: bond mode balance-alb
Message-ID: <54A1A7E4.90308(a)netbulae.eu>
Content-Type: text/plain; charset="utf-8"
On 12/29/2014 12:56 AM, Dan Kenigsberg wrote:
> On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
>> On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
>>> Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
>> Dan,
>>
>> What is bad about these modes that oVirt can't use them?
> I can only quote jpirko's words from the link above:
>
> Do not use tlb or alb in bridge, never! It does not work, that's it. The reason
> is it mangles source macs in xmit frames and arps. When it is possible, just
> use mode 4 (lacp). That should be always possible because all enterprise
> switches support that. Generally, for 99% of use cases, you *should* use mode
> 4. There is no reason to use other modes.
>
This switch is more of an office switch and only supports part of the
802.3ad standard:
PowerConnect* *2824
Scalable from small workgroups to dense access solutions, the 2824
offers 24-port flexibility plus two combo small-form-factor
pluggable (SFP) ports for connecting the switch to other networking
equipment located beyond the 100 m distance limitations of copper
cabling.
Industry-standard link aggregation adhering to IEEE 802.3ad
standards (static support only, LACP not supported)
So the only way to have some kind of bonding without buying more
expensive switches is using balance-rr (mode=0), balance-xor (mode=2),
or broadcast (mode=3).
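As a sketch, a static balance-xor bond with the layer3+4 transmit hash discussed in this thread could look like this on EL6/Fedora; the device name and slave layout are assumptions for illustration, not taken from anyone's actual configuration:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 (hypothetical)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=2 xmit_hash_policy=layer3+4 miimon=100"
```

Each slave NIC's ifcfg file would then carry MASTER=bond0 and SLAVE=yes, and the switch side would need a matching static LAG (no LACP).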
>> I just tested mode 4, and the LACP with Fedora 20 appears to not be
>> compatible with the LAG mode on my Dell 2824.
>>
>> Would there be any issues with bringing two NICs into the VM and doing
>> balance-alb at the guest level?
>>
Kind regards,
Jorick Astrego
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
------------------------------
Message: 2
Date: Mon, 29 Dec 2014 20:14:55 +0100
From: Jorick Astrego <j.astrego(a)netbulae.eu>
To: users(a)ovirt.org
Subject: Re: [ovirt-users] ??: bond mode balance-alb
Message-ID: <54A1A82F.1090100(a)netbulae.eu>
Content-Type: text/plain; charset="utf-8"
On 12/29/2014 12:56 AM, Dan Kenigsberg wrote:
> On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
>> On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
>>> Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
>>> https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
>>
Sorry, no mode 0. So only mode 2 or 3 for your environment....
Kind regards,
Jorick
Met vriendelijke groet, With kind regards,
Jorick Astrego
Netbulae Virtualization Experts
----------------
Tel: 053 20 30 270 info(a)netbulae.eu Staalsteden 4-3A KvK 08198180
Fax: 053 20 30 271 www.netbulae.eu 7547 TA Enschede BTW NL821234584B01
----------------
------------------------------
Message: 3
Date: Tue, 30 Dec 2014 00:12:52 +0000
From: Mikola Rose <mrose(a)power-soft.com>
To: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: [ovirt-users] HostedEngine Deployment Woes
Message-ID: <F992C848-E4EB-468E-83F4-37646EDB3E62(a)power-soft.com>
Content-Type: text/plain; charset="us-ascii"
Hi List Members;
I have been struggling with deploying the oVirt hosted engine; I keep running into a timeout during the "Misc configuration" stage. Any suggestions on how I can troubleshoot this?
Redhat 2.6.32-504.3.3.el6.x86_64
Installed Packages
ovirt-host-deploy.noarch 1.2.5-1.el6ev @rhel-6-server-rhevm-3.4-rpms
ovirt-host-deploy-java.noarch 1.2.5-1.el6ev @rhel-6-server-rhevm-3.4-rpms
ovirt-hosted-engine-ha.noarch 1.1.6-3.el6ev @rhel-6-server-rhevm-3.4-rpms
ovirt-hosted-engine-setup.noarch 1.1.5-1.el6ev @rhel-6-server-rhevm-3.4-rpms
rhevm-setup-plugin-ovirt-engine.noarch 3.4.4-2.2.el6ev @rhel-6-server-rhevm-3.4-rpms
rhevm-setup-plugin-ovirt-engine-common.noarch 3.4.4-2.2.el6ev @rhel-6-server-rhevm-3.4-rpms
Please confirm installation settings (Yes, No)[No]: Yes
[ INFO ] Stage: Transaction setup
[ INFO ] Stage: Misc configuration
[ INFO ] Stage: Package installation
[ INFO ] Stage: Misc configuration
[ INFO ] Configuring libvirt
[ INFO ] Configuring VDSM
[ INFO ] Starting vdsmd
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Waiting for VDSM hardware info
[ INFO ] Connecting Storage Domain
[ INFO ] Connecting Storage Pool
[ INFO ] Verifying sanlock lockspace initialization
[ INFO ] sanlock lockspace already initialized
[ INFO ] sanlock metadata already initialized
[ INFO ] Creating VM Image
[ INFO ] Disconnecting Storage Pool
[ INFO ] Start monitoring domain
[ ERROR ] Failed to execute stage 'Misc configuration': The read operation timed out
[ INFO ] Stage: Clean up
[ INFO ] Generating answer file '/etc/ovirt-hosted-engine/answers.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
2014-12-29 14:53:41 DEBUG otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace lockspace._misc:133 Ensuring lease for lockspace hosted-engine, host id 1 is acquired (file: /rhev/data-center/mnt/192.168.0.75:_Volumes_Raid1/8094d528-7aa2-4c28-839f-73d7c8bcfebb/ha_agent/hosted-engine.lockspace)
2014-12-29 14:53:41 INFO otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace lockspace._misc:144 sanlock lockspace already initialized
2014-12-29 14:53:41 INFO otopi.plugins.ovirt_hosted_engine_setup.sanlock.lockspace lockspace._misc:157 sanlock metadata already initialized
2014-12-29 14:53:41 DEBUG otopi.context context._executeMethod:138 Stage misc METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.image.Plugin._misc
2014-12-29 14:53:41 INFO otopi.plugins.ovirt_hosted_engine_setup.vm.image image._misc:162 Creating VM Image
2014-12-29 14:53:41 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vm.image image._misc:163 createVolume
2014-12-29 14:53:42 DEBUG otopi.plugins.ovirt_hosted_engine_setup.vm.image image._misc:184 Created volume d8e7eed4-c763-4b3d-8a71-35f2d692a73d, request was:
- image: 9043e535-ea94-41f8-98df-6fdbfeb107c3
- volume: e6a9291d-ac21-4a95-b43c-0d6e552baaa2
2014-12-29 14:53:42 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48 Waiting for existing tasks to complete
2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48 Waiting for existing tasks to complete
2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138 Stage misc METHOD otopi.plugins.ovirt_hosted_engine_setup.vm.boot_disk.Plugin._misc
2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:144 condition False
2014-12-29 14:53:43 DEBUG otopi.context context._executeMethod:138 Stage misc METHOD otopi.plugins.ovirt_hosted_engine_setup.storage.storage.Plugin._disconnect_pool
2014-12-29 14:53:43 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._disconnect_pool:971 Disconnecting Storage Pool
2014-12-29 14:53:43 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:48 Waiting for existing tasks to complete
2014-12-29 14:53:43 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._spmStop:602 spmStop
2014-12-29 14:53:43 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._spmStop:611
2014-12-29 14:53:43 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._storagePoolConnection:573 disconnectStoragePool
2014-12-29 14:53:45 INFO otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._disconnect_pool:975 Start monitoring domain
2014-12-29 14:53:45 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._startMonitoringDomain:529 _startMonitoringDomain
2014-12-29 14:53:46 DEBUG otopi.plugins.ovirt_hosted_engine_setup.storage.storage storage._startMonitoringDomain:534 {'status': {'message': 'OK', 'code': 0}}
2014-12-29 14:53:51 DEBUG otopi.ovirt_hosted_engine_setup.tasks tasks.wait:127 Waiting for domain monitor
2014-12-29 14:54:51 DEBUG otopi.context context._executeMethod:152 method exception
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in _executeMethod
method['method']()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py", line 976, in _disconnect_pool
self._startMonitoringDomain()
File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/ovirt-hosted-engine-setup/storage/storage.py", line 539, in _startMonitoringDomain
waiter.wait(self.environment[ohostedcons.StorageEnv.SD_UUID])
File "/usr/lib/python2.6/site-packages/ovirt_hosted_engine_setup/tasks.py", line 128, in wait
response = serv.s.getVdsStats()
File "/usr/lib64/python2.6/xmlrpclib.py", line 1199, in __call__
return self.__send(self.__name, args)
File "/usr/lib64/python2.6/xmlrpclib.py", line 1489, in __request
verbose=self.__verbose
File "/usr/lib64/python2.6/xmlrpclib.py", line 1237, in request
errcode, errmsg, headers = h.getreply()
File "/usr/lib64/python2.6/httplib.py", line 1064, in getreply
response = self._conn.getresponse()
File "/usr/lib64/python2.6/httplib.py", line 990, in getresponse
response.begin()
File "/usr/lib64/python2.6/httplib.py", line 391, in begin
version, status, reason = self._read_status()
File "/usr/lib64/python2.6/httplib.py", line 349, in _read_status
line = self.fp.readline()
File "/usr/lib64/python2.6/socket.py", line 433, in readline
data = recv(1)
File "/usr/lib64/python2.6/ssl.py", line 215, in recv
return self.read(buflen)
File "/usr/lib64/python2.6/ssl.py", line 136, in read
return self._sslobj.read(len)
SSLError: The read operation timed out
/var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20141229145137-g8d2or.log
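The traceback shows the setup script blocking in a plain XML-RPC call (`serv.s.getVdsStats()`) until the SSL read timed out, i.e. VDSM stopped answering while the domain monitor was starting. A minimal Python 3 sketch of the same call pattern with an explicit timeout follows; the endpoint URL and port are assumptions for illustration (the original code is Python 2.6 `xmlrpclib`, and real VDSM connections need client certificates):

```python
import socket
import xmlrpc.client  # modern name for the xmlrpclib seen in the traceback


def get_vds_stats(url="https://localhost:54321", timeout=5.0):
    """Call getVdsStats over XML-RPC, returning None instead of hanging.

    The URL/port are illustrative assumptions, not taken from the log.
    """
    socket.setdefaulttimeout(timeout)  # bound the read, unlike the setup script
    server = xmlrpc.client.ServerProxy(url)
    try:
        return server.getVdsStats()
    except OSError:  # covers socket.timeout, SSL errors, connection refused
        return None


print(get_vds_stats(timeout=1.0))  # prints None when nothing is listening
```

The point of the sketch is only that a bounded timeout turns the hang into a diagnosable failure; the underlying cause here is still VDSM/storage not responding.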
------------------------------
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
End of Users Digest, Vol 39, Issue 173
**************************************
10 years, 4 months
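The SSLError at the bottom of the traceback above is simply a blocking socket read hitting its timeout while waiting for VDSM to answer getVdsStats(). The failure mode can be reproduced in miniature (Python 3 here, plain TCP instead of the SSL-wrapped XML-RPC connection; all names are illustrative):

```python
import socket

# A listener that accepts connections but never writes anything,
# standing in for a VDSM endpoint that has stopped responding.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket()
client.settimeout(0.2)  # the real client uses a much longer timeout
client.connect(server.getsockname())

try:
    client.recv(1)      # like self.fp.readline() above: blocks, then times out
    timed_out = False
except socket.timeout:
    timed_out = True
finally:
    client.close()
    server.close()

print("timed out:", timed_out)  # prints "timed out: True"
```

In the setup's case the fix is on the server side (get vdsmd answering again), not a longer client timeout; the sketch only shows why the stack ends in `ssl.py` rather than in oVirt code.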
engine-iso-uploader unexpected behaviour
by Steve Atkinson
When attempting to use engine-iso-uploader to drop ISOs into my ISO storage
domain, I get the following results.
Using engine-iso-uploader --iso-domain=[domain] upload [iso] does not work,
because the engine does not have access to our storage network: it
attempts to mount an address that is not routable. I thought to resolve
this by adding an interface to the Hosted Engine, only to find that I
cannot modify the engine's VM config from the GUI. I receive the
error: Cannot add Interface. This VM is not managed by the engine.
Actually, I get that error whenever I attempt to modify anything about the
engine. Maybe this is expected behavior? I can't find any best practices
regarding Hosted Engine administration.
Alternatively, using engine-iso-uploader --nfs-server=[path] upload [iso]
--verbose returns the following error:
ERROR: local variable 'domain_type' referenced before assignment
INFO: Use the -h option to see usage.
DEBUG: Configuration:
DEBUG: command: upload
DEBUG: Traceback (most recent call last):
DEBUG: File "/usr/bin/engine-iso-uploader", line 1440, in <module>
DEBUG: isoup = ISOUploader(conf)
DEBUG: File "/usr/bin/engine-iso-uploader", line 455, in __init__
DEBUG: self.upload_to_storage_domain()
DEBUG: File "/usr/bin/engine-iso-uploader", line 1089, in
upload_to_storage_domain
DEBUG: elif domain_type in ('localfs', ):
DEBUG: UnboundLocalError: local variable 'domain_type' referenced before
assignment
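The UnboundLocalError above is the classic pattern of a local variable assigned only on some branches. A minimal sketch of that failure mode (illustrative only, not the actual engine-iso-uploader code):

```python
def upload_to_storage_domain(iso_domain=None):
    # domain_type is only bound on the --iso-domain path; a bare
    # --nfs-server invocation skips the assignment, so Python raises
    # as soon as the name is read further down.
    if iso_domain is not None:
        domain_type = "nfs"
    if domain_type in ("localfs",):  # UnboundLocalError on the --nfs-server path
        pass

try:
    upload_to_storage_domain()  # simulates the --nfs-server invocation
    raised = False
except UnboundLocalError:
    raised = True

print("raised:", raised)  # prints "raised: True"
```

That matches the symptom: the error appears only with --nfs-server, because the --iso-domain path is the one that sets the variable.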
The engine is self-hosted and is version 3.5.0.1-1.el6.
Thanks!
-Steve
feedback-on-oVirt-engine-3.5.0.1-1.el6
by bingozhou2013
Dear Sir,

When I try to install oVirt-engine 3.5 on CentOS 6.6, the error below is shown:

--> Finished Dependency Resolution
Error: Package: ovirt-engine-backend-3.5.0.1-1.el6.noarch (ovirt-3.5)
           Requires: novnc
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

I have added the EPEL source and installed ovirt-release35.rpm, but it still shows "Requires: novnc". Please help me to check this. Thank you very much!

bingozhou2013
domain storage of the type Glusterfs
by suporte@logicworks.pt
Hi,
Concerning version 3.5.0.1-1.el6:
to create a storage domain of type GlusterFS, do I have to create a host first?
Thanks
Jose
--
Jose Ferradeira
http://www.logicworks.pt
Centos 6.6 ovirt install fail. dependency novnc error
by Anders Hellquist
Hi !
I am trying to install ovirt 3.5 on a Centos 6.6 host
Have tried both
http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
http://resources.ovirt.org/pub/yum-repo/ovirt-release35-snapshot.rpm
While installing ovirt-engine, ovirt-engine-backend requires novnc and fails.
--> Running transaction check
---> Package bind-libs.x86_64 32:9.8.2-0.30.rc1.el6_6.1 will be installed
---> Package ovirt-engine-backend.noarch
0:3.5.1-0.0.master.20141219191829.gite2b1d3a.el6 will be installed
--> Processing Dependency: novnc for package:
ovirt-engine-backend-3.5.1-0.0.master.20141219191829.gite2b1d3a.el6.noarch
---> Package perl-Pod-Escapes.x86_64 1:1.04-136.el6_6.1 will be installed
---> Package python-crypto.x86_64 0:2.0.1-22.el6 will be installed
---> Package setools-libs.x86_64 0:3.3.7-4.el6 will be installed
--> Finished Dependency Resolution
Error: Package:
ovirt-engine-backend-3.5.1-0.0.master.20141219191829.gite2b1d3a.el6.noarch
(ovirt-3.5-snapshot)
Requires: novnc
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
I have looked for a novnc package, but maybe it is named something else.
Anyone?
Best regards, Anders
Unable to reinstall hosts after network removal.
by Arman Khalatyan
Hello,
I have a little trouble with ovirt 3.5 on CentOS6.6:
I removed all networks from all hosts.
Then, after removing the network from the data center, the hosts became unusable.
Every time after a reinstall, the host claims that the network is not
configured, but it is already removed from the network tab in the DC.
Where does it get the old configuration from? The old interfaces are also restored
every time on the reinstalled hosts.
Which DB table is in charge of DC networks?
Thanks,
Arman.
***********************************************************
Dr. Arman Khalatyan eScience -SuperComputing Leibniz-Institut für
Astrophysik Potsdam (AIP) An der Sternwarte 16, 14482 Potsdam, Germany
***********************************************************
VM failover with ovirt3.5
by Yue, Cong
Hi
In my environment, I have 3 oVirt nodes as one cluster, and on top of host-1 there is one VM hosting the oVirt engine.
I also have one external storage for the cluster to use as the data domain for the engine and data.
I confirmed that live migration works well in my environment.
But VM failover seems very buggy if I force one oVirt node to shut down. Sometimes the VM on the node that was shut down can migrate to another host, but it takes more than several minutes.
Sometimes it cannot migrate at all; sometimes the VM only begins to move once the host is back.
Is there documentation that explains how VM failover works? And are there any reported bugs related to this?
Thanks in advance,
Cong
________________________________
This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. If you are the intended recipient, please be advised that the content of this message is subject to access, review and disclosure by the sender's e-mail System Administrator.
bond mode balance-alb
by Blaster
I’m looking to set up bonding. I’m using Dell 2824 switches, which support LAG (static 802.3ad, apparently?) but not LACP (dynamic 802.3ad). Fedora only seems to support the dynamic mode from what I can tell.
The next best choice seems to be balance-alb, but for some reason it’s not available as a mode in the drop-down. Any reason why it’s missing? If I select custom and enter balance-alb, is that supported?
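For reference, at the plain initscripts level a balance-alb bond is just a bonding option string. A sketch of what the custom mode would need to end up as on the host (illustrative file contents, assuming initscripts-style networking; whether oVirt accepts mode 6 for a bridged VM network is a separate question):

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0 (illustrative sketch)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=balance-alb miimon=100"
```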
Re: [ovirt-users] Users Digest, Vol 39, Issue 171
by Nikolai Sednev
Hi,
Can you please provide engine.log from /var/log/ovirt-engine/engine.log, and try the following:
1. Revert all three hosts to maintenance mode "none".
2. Check that the engine is up and running.
3. Put one of the hosts that is not running the engine into local maintenance mode.
4. Put the host that is running the engine into local maintenance mode.
5. Check that the engine migrated to the one and only remaining host, which had not been put into maintenance mode at all.
Can you also provide your engine version? Is it 3.4-something?
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 29, 2014 8:29:36 PM
Subject: Users Digest, Vol 39, Issue 171
Today's Topics:
1. Re: VM failover with ovirt3.5 (Yue, Cong)
----------------------------------------------------------------------
Message: 1
Date: Mon, 29 Dec 2014 10:29:04 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Artyom Lukianov <alukiano(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
I disabled local maintenance mode for all hosts, and then set only the host where the HE VM runs to local maintenance mode. The logs are as follows. During the migration of the HE VM, a fatal error shows up. By the way, the HE VM also cannot do live migration, while other VMs can.
---
[root@compute2-3 ~]# hosted-engine --set-maintenance --mode=local
You have new mail in /var/spool/mail/root
[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-29
13:16:12,435::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.92 (id: 3, score: 2400)
MainThread::INFO::2014-12-29
13:16:22,711::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:22,711::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.92 (id: 3, score: 2400)
MainThread::INFO::2014-12-29
13:16:32,978::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:32,978::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:16:43,272::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:43,272::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:16:53,316::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-29
13:16:53,562::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-29
13:16:53,562::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:17:03,600::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-29
13:17:03,611::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419877023.61 type=state_transition
detail=EngineUp-LocalMaintenanceMigrateVm hostname='compute2-3'
MainThread::INFO::2014-12-29
13:17:03,672::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineUp-LocalMaintenanceMigrateVm) sent? sent
MainThread::INFO::2014-12-29
13:17:03,911::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-29
13:17:03,912::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenanceMigrateVm (score: 0)
MainThread::INFO::2014-12-29
13:17:03,912::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:17:03,960::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419877023.96 type=state_transition
detail=LocalMaintenanceMigrateVm-EngineMigratingAway
hostname='compute2-3'
MainThread::INFO::2014-12-29
13:17:03,980::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(LocalMaintenanceMigrateVm-EngineMigratingAway) sent? sent
MainThread::INFO::2014-12-29
13:17:04,218::states::66::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_penalize_memory)
Penalizing score by 400 due to low free memory
MainThread::INFO::2014-12-29
13:17:04,218::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineMigratingAway (score: 2000)
MainThread::INFO::2014-12-29
13:17:04,219::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::ERROR::2014-12-29
13:17:14,251::hosted_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitor_migration)
Failed to migrate
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
line 863, in _monitor_migration
vm_id,
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/vds_client.py",
line 85, in run_vds_client_cmd
response['status']['message'])
DetailedError: Error 12 from migrateStatus: Fatal error during migration
MainThread::INFO::2014-12-29
13:17:14,262::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419877034.26 type=state_transition
detail=EngineMigratingAway-ReinitializeFSM hostname='compute2-3'
MainThread::INFO::2014-12-29
13:17:14,263::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineMigratingAway-ReinitializeFSM) sent? ignored
MainThread::INFO::2014-12-29
13:17:14,496::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state ReinitializeFSM (score: 0)
MainThread::INFO::2014-12-29
13:17:14,496::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:17:24,536::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-29
13:17:24,547::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419877044.55 type=state_transition
detail=ReinitializeFSM-LocalMaintenance hostname='compute2-3'
MainThread::INFO::2014-12-29
13:17:24,574::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(ReinitializeFSM-LocalMaintenance) sent? sent
MainThread::INFO::2014-12-29
13:17:24,812::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-29
13:17:24,812::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:17:34,851::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-29
13:17:35,095::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-29
13:17:35,095::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-29
13:17:45,130::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-29
13:17:45,368::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-29
13:17:45,368::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
^C
[root@compute2-3 ~]#
[root@compute2-3 ~]# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 0
Local maintenance : True
Host timestamp : 1014956
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1014956 (Mon Dec 29 13:20:19 2014)
host-id=1
score=0
maintenance=True
state=LocalMaintenance
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 866019
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=866019 (Mon Dec 29 10:19:45 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : True
Hostname : 10.0.0.92
Host ID : 3
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 860493
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=860493 (Mon Dec 29 10:20:35 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown
[root@compute2-3 ~]#
---
Thanks,
Cong
On 2014/12/29, at 8:43, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
I see that the HE VM runs on the host with IP 10.0.0.94, and the two other hosts are in "Local Maintenance" state, so the VM will not migrate to either of them. Can you try disabling local maintenance on all hosts in the HE environment, then enable local maintenance on the host where the HE VM runs, and also provide the output of hosted-engine --vm-status.
Failover works as follows:
1) if the host where the HE VM runs has a score lower by 800 than some other host in the HE environment, the HE VM will migrate to the host with the best score
2) if something happens to the VM (kernel panic, crash of a service...), the agent will restart the HE VM on another host in the HE environment with a positive score
3) if you put the host running the HE VM into local maintenance, the VM will migrate to another host with a positive score
Thanks.
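Those three rules can be condensed into a small model (my reading of the description above, with illustrative names; not the actual ovirt-hosted-engine-ha code). The agent logs in this thread show the inputs: a healthy host scores 2400, local maintenance forces the score to 0, and low free memory penalizes it by 400:

```python
MIGRATION_THRESHOLD = 800   # score gap that triggers rule 1

def should_migrate(local_score, remote_scores):
    """Rule 1: migrate the engine VM when some other host's score
    beats the local one by at least the threshold."""
    return bool(remote_scores) and \
        max(remote_scores) - local_score >= MIGRATION_THRESHOLD

# All hosts healthy: no gap, the VM stays put.
print(should_migrate(2400, [2400, 2400]))   # False
# Local maintenance zeroes the local score ("Score is 0 due to local
# maintenance mode" in the agent log), so rule 3 is rule 1 in disguise.
print(should_migrate(0, [2400, 2400]))      # True
# A 400-point low-memory penalty alone is not enough to force a move.
print(should_migrate(2000, [2400]))         # False
```

This also explains the symptom later in the thread: with both other hosts in local maintenance (score 0), no remote host ever beats the local score by 800, so the VM has nowhere to go.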
----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
To: "Artyom Lukianov" <alukiano(a)redhat.com>
Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
Sent: Monday, December 29, 2014 6:30:42 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Thanks and the --vm-status log is as follows:
[root@compute2-2 ~]# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 1008087
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1008087 (Mon Dec 29 11:25:51 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 0
Local maintenance : True
Host timestamp : 859142
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=859142 (Mon Dec 29 08:25:08 2014)
host-id=2
score=0
maintenance=True
state=LocalMaintenance
--== Host 3 status ==--
Status up-to-date : True
Hostname : 10.0.0.92
Host ID : 3
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 0
Local maintenance : True
Host timestamp : 853615
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=853615 (Mon Dec 29 08:25:57 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
You have new mail in /var/spool/mail/root
[root@compute2-2 ~]#
Could you please explain how VM failover works inside ovirt? Is there any other debug option I can enable to check the problem?
Thanks,
Cong
On 2014/12/29, at 1:39, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
Can you also provide the output of hosted-engine --vm-status, please? It was useful last time, because I do not see anything unusual.
Thanks
----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
To: "Artyom Lukianov" <alukiano(a)redhat.com>
Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
Sent: Monday, December 29, 2014 7:15:24 AM
Subject: Re: [ovirt-users] VM failover with ovirt3.5
I also changed the maintenance mode to local on another host, but the VM on that host cannot be migrated either. The logs are as follows.
[root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-28
21:09:35,236::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:45,604::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-28
21:09:45,604::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
21:09:55,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
21:09:55,701::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Trying: notify time=1419829795.7 type=state_transition
detail=EngineDown-LocalMaintenance hostname='compute2-2'
MainThread::INFO::2014-12-28
21:09:55,761::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
Success, was notification of state_transition
(EngineDown-LocalMaintenance) sent? sent
MainThread::INFO::2014-12-28
21:09:55,990::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28
21:09:55,990::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
21:09:55,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
^C
You have new mail in /var/spool/mail/root
[root@compute2-2 ~]# ps -ef | grep qemu
root 18420 2777 0 21:10 pts/0 00:00:00 grep --color=auto qemu
qemu 29809 1 0 Dec19 ? 01:17:20 /usr/libexec/qemu-kvm
-name testvm2-2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem
-m 500 -realtime mlock=off -smp
1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
c31e97d0-135e-42da-9954-162b5228dce3 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0059-3610-8033-B4C04F395931,uuid=c31e97d0-135e-42da-9954-162b5228dce3
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2-2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:17:17,driftfix=slew -no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/5cbeb8c9-4f04-48d0-a5eb-78c49187c550/a0570e8c-9867-4ec4-818f-11e102fc4f9b,if=none,id=drive-virtio-disk0,format=qcow2,serial=5cbeb8c9-4f04-48d0-a5eb-78c49187c550,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:00,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5901,addr=10.0.0.93,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
[root@compute2-2 ~]#
Thanks,
Cong
On 2014/12/28, at 20:53, "Yue, Cong" <Cong_Yue@alliedtelesis.com> wrote:
I checked again and confirmed that one guest VM is running on top of this host. The log is as follows:
[root@compute2-1 vdsm]# ps -ef | grep qemu
qemu 2983 846 0 Dec19 ? 00:00:00 [supervdsmServer] <defunct>
root 5489 3053 0 20:49 pts/0 00:00:00 grep --color=auto qemu
qemu 26128 1 0 Dec19 ? 01:09:19 /usr/libexec/qemu-kvm
-name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
-uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:18:01,driftfix=slew -no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
Thanks,
Cong
On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano@redhat.com> wrote:
I see that you set local maintenance on host 3, which does not have the engine VM on it, so there is nothing to migrate from this host.
If you set local maintenance on host 1, the VM should migrate to another host with a positive score.
Thanks
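A quick way to confirm which host is currently running the engine VM before setting local maintenance is to scan the `hosted-engine --vm-status` output. A minimal sketch, assuming the status format shown in this thread; the heredoc holds sample lines copied from the thread, and on a real host you would use `status=$(hosted-engine --vm-status)` instead:

```shell
# Sample --vm-status lines from this thread; on a real host, replace with:
#   status=$(hosted-engine --vm-status)
status=$(cat <<'EOF'
Hostname                           : 10.0.0.94
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Hostname                           : 10.0.0.93
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
EOF
)
# Remember the last Hostname seen; print it when its engine status says "vm": "up".
engine_host=$(echo "$status" | awk -F': *' '
    /^Hostname/  { host = $2 }
    /"vm": "up"/ { print host }')
echo "engine VM is on: $engine_host"
```

With the sample above this reports host 1 (10.0.0.94), which is why setting maintenance on host 3 migrates nothing.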
----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: "Simone Tiraboschi" <stirabos@redhat.com>
Cc: users@ovirt.org
Sent: Saturday, December 27, 2014 6:58:32 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Hi
I had a try with "hosted-engine --set-maintenance --mode=local" on
compute2-1, which is host 3 in my cluster. From the log, it shows the
maintenance mode is detected, but migration does not happen.
The logs are as follows. Is there any other config I need to check?
[root@compute2-1 vdsm]# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 836296
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=836296 (Sat Dec 27 11:42:39 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 687358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=687358 (Sat Dec 27 08:42:04 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : True
Hostname : 10.0.0.92
Host ID : 3
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 0
Local maintenance : True
Host timestamp : 681827
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=681827 (Sat Dec 27 08:42:40 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-27
11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-27
08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.94 (id 1): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987
(Sat Dec 27 11:37:30
2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status':
{'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400,
'maintenance': False, 'host-ts': 835987}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.92 (id 3): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528
(Sat Dec 27 08:37:41
2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n',
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True,
'host-ts': 681528}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 2): {'engine-health': {'reason': 'vm not running on this
host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':
True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215,
'gateway': True}
MainThread::INFO::2014-12-27
08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
Thanks,
Cong
On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos@redhat.com> wrote:
----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: "Simone Tiraboschi" <stirabos@redhat.com>
Cc: users@ovirt.org
Sent: Friday, December 19, 2014 7:22:10 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5
Thanks for the information. This is the log for my three ovirt nodes.
From the output of hosted-engine --vm-status, it shows the engine state for
my 2nd and 3rd ovirt nodes is DOWN.
Is this the reason why VM failover does not work in my environment?
No, they look OK: you can run the engine VM on a single host at a time.
How can I make the engine also work on my 2nd and 3rd ovirt nodes?
If you put the host 1 in local maintenance mode ( hosted-engine --set-maintenance --mode=local ) the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put host 2 in local maintenance mode the VM should migrate again.
Can you please try that and post the logs if something is going bad?
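The cycle described above can be sketched as a short command sequence. This is a dry run, not something to paste onto a host: the `run` helper (a stand-in introduced here) only records each command so the order is visible; on a real cluster you would execute the `hosted-engine` calls directly, the same flags already used elsewhere in this thread:

```shell
# Dry-run sketch of the migration test: record each step instead of executing it.
log=""
run() { log="$log$* ; "; }

# 1. On the host currently running the engine VM, enter local maintenance;
#    the agent should drop its score to 0 and the VM should migrate away.
run hosted-engine --set-maintenance --mode=local

# 2. Watch until another host reports the engine VM as up.
run hosted-engine --vm-status

# 3. Reactivate the first host once the VM is running elsewhere.
run hosted-engine --set-maintenance --mode=none

printf '%s\n' "$log"
```

Repeating the same three steps from the host the VM landed on should migrate it back.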
--
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 150475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=150475 (Fri Dec 19 13:12:18 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 1572
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1572 (Fri Dec 19 10:12:18 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : False
Hostname : 10.0.0.92
Host ID : 3
Engine status : unknown stale-data
Score : 2400
Local maintenance : False
Host timestamp : 987
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=987 (Fri Dec 19 10:09:58 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown
--
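The `Extra metadata` block in the `--vm-status` output above is plain key=value text, so the fields worth watching (score, maintenance, state) are easy to pull out for quick checks. A hedged sketch: the heredoc holds the host 3 values copied from the status above, and the extraction pipeline in the comment is only a suggestion for feeding it live output:

```shell
# Sample "Extra metadata" block copied from host 3 above; on a real host you
# might extract it from live output, e.g.:
#   meta=$(hosted-engine --vm-status)
meta=$(cat <<'EOF'
metadata_parse_version=1
metadata_feature_version=1
timestamp=987 (Fri Dec 19 10:09:58 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown
EOF
)
# key=value lines: split on '=' and match the key exactly.
score=$(echo "$meta" | awk -F= '$1 == "score" { print $2 }')
state=$(echo "$meta" | awk -F= '$1 == "state" { print $2 }')
echo "score=$score state=$state"
```

A healthy standby host shows score=2400 and state=EngineDown; score=0 with state=LocalMaintenance means the agent will never accept the engine VM there.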
And the /var/log/ovirt-hosted-engine-ha/agent.log for three ovirt nodes are
as follows:
--
10.0.0.94(hosted-engine-1)
---
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.93 (id 2): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448
(Fri Dec 19 10:10:14
2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
'host-ts': 1448}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.92 (id 3): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987
(Fri Dec 19 10:09:58
2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
'host-ts': 987}
MainThread::INFO::2014-12-19
13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up',
'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance':
False, 'cpu-load': 0.0269, 'gateway': True}
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
----
10.0.0.93 (hosted-engine-2)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
10.0.0.92(hosted-engine-3)
same as 10.0.0.93
--
-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:28 AM
To: Yue, Cong
Cc: users(a)ovirt.org<mailto:users@ovirt.org><mailto:users@ovirt.org><mailto:users@ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: users@ovirt.org
Sent: Friday, December 19, 2014 2:14:33 AM
Subject: [ovirt-users] VM failover with ovirt3.5
Hi
In my environment, I have 3 oVirt nodes forming one cluster, and on top of
host-1 there is one VM hosting the oVirt engine. I also have one external
storage array that the cluster uses as the data domain for the engine and
for data.
I confirmed that live migration works well in my environment, but VM
failover seems very buggy when I force one oVirt node to shut down.
Sometimes the VM on the node that was shut down can migrate to another
host, but it takes more than several minutes; sometimes it cannot migrate
at all; and sometimes the VM only begins to move once the host is back.
Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/?
Is there any documentation explaining how VM failover works? And are there
any reported bugs related to this?
http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
Thanks in advance,
Cong
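The score-based failover behavior asked about here can be sketched roughly as follows. This is an illustrative simplification, not the agent's actual code; the 2400 base score, the 800-point migration margin, and the 400-point low-memory penalty are values that appear in this thread's logs and replies:

```python
# Simplified sketch of ovirt-hosted-engine-ha's score-based host
# selection as described in this thread. NOT the agent's real code:
# the constants below are taken from the thread's logs and discussion.
BASE_SCORE = 2400
MIGRATION_MARGIN = 800  # remote host must beat the local score by this much

def host_score(base=BASE_SCORE, local_maintenance=False, low_free_memory=False):
    """Compute an HA score for a host under these simplified rules."""
    if local_maintenance:
        return 0          # local maintenance forces the score to 0
    score = base
    if low_free_memory:
        score -= 400      # "Penalizing score by 400 due to low free memory"
    return score

def should_migrate(local_score, best_remote_score):
    """The engine VM migrates only if a remote host wins by the margin."""
    return best_remote_score > 0 and best_remote_score - local_score >= MIGRATION_MARGIN

# A host in local maintenance (score 0) always loses to a healthy host:
assert should_migrate(host_score(local_maintenance=True), host_score())
# Two healthy hosts with equal scores: no migration.
assert not should_migrate(host_score(), host_score())
```

Under this model, a host that is shut down simply stops publishing a score, and the remaining hosts restart the VM only while their own scores stay positive.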
This e-mail message is for the sole use of the intended recipient(s)
and may contain confidential and privileged information. Any
unauthorized review, use, disclosure or distribution is prohibited. If
you are not the intended recipient, please contact the sender by reply
e-mail and destroy all copies of the original message. If you are the
intended recipient, please be advised that the content of this message
is subject to access, review and disclosure by the sender's e-mail System
Administrator.
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
End of Users Digest, Vol 39, Issue 171
<html><body><div style=3D"font-family: georgia,serif; font-size: 12pt; colo=
r: #000000"><div>Hi,</div><div>Can you please provide engine.log from =
/var/log/ovirt-engine/engine.log and to try as follows:</div><div><ol><li>R=
evert all tree hosts to maintenance-mode=3Dnone.</li><li>Check that engine =
up and running.</li><li>Turn one of the hosts that is not running the engin=
e, to maintenance mode local.</li><li>Turn host that is running the engine =
to maintenance mode local.</li><li>Check that engine migrated to one and th=
e only remaining host, that had not been put in to maintenance mode at all.=
</li></ol><div>Can you also provide your engine version, is it 3.4 somethin=
g?</div></div><div><br></div><div><span name=3D"x"></span><br>Thanks in adv=
ance.<br><div><br></div>Best regards,<br>Nikolai<br>____________________<br=
>Nikolai Sednev<br>Senior Quality Engineer at Compute team<br>Red Hat Israe=
l<br>34 Jerusalem Road,<br>Ra'anana, Israel 43501<br><div><br></div>Tel: &n=
bsp; +972 9 7692043<br>Mobile: +972 52 7342734<br>Emai=
l: nsednev(a)redhat.com<br>IRC: nsednev<span name=3D"x"></span><br></div><div=
><br></div><hr id=3D"zwchr"><div style=3D"color:#000;font-weight:normal;fon=
t-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif;=
font-size:12pt;"><b>From: </b>users-request(a)ovirt.org<br><b>To: </b>users@o=
virt.org<br><b>Sent: </b>Monday, December 29, 2014 8:29:36 PM<br><b>Subject=
: </b>Users Digest, Vol 39, Issue 171<br><div><br></div>Send Users mailing =
list submissions to<br> user=
s(a)ovirt.org<br><div><br></div>To subscribe or unsubscribe via the World Wid=
e Web, visit<br> http://list=
s.ovirt.org/mailman/listinfo/users<br>or, via email, send a message with su=
bject or body 'help' to<br> =
users-request(a)ovirt.org<br><div><br></div>You can reach the person managing=
the list at<br> users-owner=
@ovirt.org<br><div><br></div>When replying, please edit your Subject line s=
o it is more specific<br>than "Re: Contents of Users digest..."<br><div><br=
></div><br>Today's Topics:<br><div><br></div> 1. Re: VM f=
ailover with ovirt3.5 (Yue, Cong)<br><div><br></div><br>-------------------=
---------------------------------------------------<br><div><br></div>Messa=
ge: 1<br>Date: Mon, 29 Dec 2014 10:29:04 -0800<br>From: "Yue, Cong" <Con=
g_Yue(a)alliedtelesis.com><br>To: Artyom Lukianov <alukiano(a)redhat.com&=
gt;<br>Cc: "users(a)ovirt.org" <users(a)ovirt.org><br>Subject: Re: [ovirt=
-users] VM failover with ovirt3.5<br>Message-ID: <21D302CF-AD6F-4E8C-A37=
3-52ADAC1C129B(a)alliedtelesis.com><br>Content-Type: text/plain; charset=
=3D"utf-8"<br><div><br></div>I disabled local maintenance mode for all host=
s, and then only set the host where HE VM is there to local maintenance mod=
e. The logs are as follows. During the migration of HE VM , it shows some f=
atal error happen. By the way, also HE VM can not work with live migration.=
Instead, other VMs can do live migration.<br><div><br></div>---<br>[root@c=
ompute2-3 ~]# hosted-engine --set-maintenance --mode=3Dlocal<br>You have ne=
w mail in /var/spool/mail/root<br>[root@compute2-3 ~]# tail -f /var/log/ovi=
rt-hosted-engine-ha/agent.log<br>MainThread::INFO::2014-12-29<br>13:16:12,4=
35::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEn=
gine::(start_monitoring)<br>Best remote host 10.0.0.92 (id: 3, score: 2400)=
<br>MainThread::INFO::2014-12-29<br>13:16:22,711::hosted_engine::327::ovirt=
_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>C=
urrent state EngineUp (score: 2400)<br>MainThread::INFO::2014-12-29<br>13:1=
6:22,711::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Ho=
stedEngine::(start_monitoring)<br>Best remote host 10.0.0.92 (id: 3, score:=
2400)<br>MainThread::INFO::2014-12-29<br>13:16:32,978::hosted_engine::327:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>Current state EngineUp (score: 2400)<br>MainThread::INFO::2014-12-29<b=
r>13:16:32,978::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_eng=
ine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (id: 2, =
score: 2400)<br>MainThread::INFO::2014-12-29<br>13:16:43,272::hosted_engine=
::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_moni=
toring)<br>Current state EngineUp (score: 2400)<br>MainThread::INFO::2014-1=
2-29<br>13:16:43,272::hosted_engine::332::ovirt_hosted_engine_ha.agent.host=
ed_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (i=
d: 2, score: 2400)<br>MainThread::INFO::2014-12-29<br>13:16:53,316::states:=
:394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)<br=
>Engine vm running on localhost<br>MainThread::INFO::2014-12-29<br>13:16:53=
,562::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted=
Engine::(start_monitoring)<br>Current state EngineUp (score: 2400)<br>MainT=
hread::INFO::2014-12-29<br>13:16:53,562::hosted_engine::332::ovirt_hosted_e=
ngine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Best remot=
e host 10.0.0.93 (id: 2, score: 2400)<br>MainThread::INFO::2014-12-29<br>13=
:17:03,600::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engi=
ne.HostedEngine::(check)<br>Local maintenance detected<br>MainThread::INFO:=
:2014-12-29<br>13:17:03,611::brokerlink::111::ovirt_hosted_engine_ha.lib.br=
okerlink.BrokerLink::(notify)<br>Trying: notify time=3D1419877023.61 type=
=3Dstate_transition<br>detail=3DEngineUp-LocalMaintenanceMigrateVm hostname=
=3D'compute2-3'<br>MainThread::INFO::2014-12-29<br>13:17:03,672::brokerlink=
::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)<br>Succes=
s, was notification of state_transition<br>(EngineUp-LocalMaintenanceMigrat=
eVm) sent? sent<br>MainThread::INFO::2014-12-29<br>13:17:03,911::states::20=
8::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)<br>Scor=
e is 0 due to local maintenance mode<br>MainThread::INFO::2014-12-29<br>13:=
17:03,912::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.H=
ostedEngine::(start_monitoring)<br>Current state LocalMaintenanceMigrateVm =
(score: 0)<br>MainThread::INFO::2014-12-29<br>13:17:03,912::hosted_engine::=
332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monito=
ring)<br>Best remote host 10.0.0.93 (id: 2, score: 2400)<br>MainThread::INF=
O::2014-12-29<br>13:17:03,960::brokerlink::111::ovirt_hosted_engine_ha.lib.=
brokerlink.BrokerLink::(notify)<br>Trying: notify time=3D1419877023.96 type=
=3Dstate_transition<br>detail=3DLocalMaintenanceMigrateVm-EngineMigratingAw=
ay<br>hostname=3D'compute2-3'<br>MainThread::INFO::2014-12-29<br>13:17:03,9=
80::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(not=
ify)<br>Success, was notification of state_transition<br>(LocalMaintenanceM=
igrateVm-EngineMigratingAway) sent? sent<br>MainThread::INFO::2014-12-29<br=
>13:17:04,218::states::66::ovirt_hosted_engine_ha.agent.hosted_engine.Hoste=
dEngine::(_penalize_memory)<br>Penalizing score by 400 due to low free memo=
ry<br>MainThread::INFO::2014-12-29<br>13:17:04,218::hosted_engine::327::ovi=
rt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br=
>Current state EngineMigratingAway (score: 2000)<br>MainThread::INFO::2014-=
12-29<br>13:17:04,219::hosted_engine::332::ovirt_hosted_engine_ha.agent.hos=
ted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (=
id: 2, score: 2400)<br>MainThread::ERROR::2014-12-29<br>13:17:14,251::hoste=
d_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_m=
onitor_migration)<br>Failed to migrate<br>Traceback (most recent call last)=
:<br> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/ag=
ent/hosted_engine.py",<br>line 863, in _monitor_migration<br> v=
m_id,<br> File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_h=
a/lib/vds_client.py",<br>line 85, in run_vds_client_cmd<br> res=
ponse['status']['message'])<br>DetailedError: Error 12 from migrateStatus: =
Fatal error during migration<br>MainThread::INFO::2014-12-29<br>13:17:14,26=
2::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(noti=
fy)<br>Trying: notify time=3D1419877034.26 type=3Dstate_transition<br>detai=
l=3DEngineMigratingAway-ReinitializeFSM hostname=3D'compute2-3'<br>MainThre=
ad::INFO::2014-12-29<br>13:17:14,263::brokerlink::120::ovirt_hosted_engine_=
ha.lib.brokerlink.BrokerLink::(notify)<br>Success, was notification of stat=
e_transition<br>(EngineMigratingAway-ReinitializeFSM) sent? ignored<br>Main=
Thread::INFO::2014-12-29<br>13:17:14,496::hosted_engine::327::ovirt_hosted_=
engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Current s=
tate ReinitializeFSM (score: 0)<br>MainThread::INFO::2014-12-29<br>13:17:14=
,496::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted=
Engine::(start_monitoring)<br>Best remote host 10.0.0.93 (id: 2, score: 240=
0)<br>MainThread::INFO::2014-12-29<br>13:17:24,536::state_decorators::124::=
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>Local m=
aintenance detected<br>MainThread::INFO::2014-12-29<br>13:17:24,547::broker=
link::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)<br>Tr=
ying: notify time=3D1419877044.55 type=3Dstate_transition<br>detail=3DReini=
tializeFSM-LocalMaintenance hostname=3D'compute2-3'<br>MainThread::INFO::20=
14-12-29<br>13:17:24,574::brokerlink::120::ovirt_hosted_engine_ha.lib.broke=
rlink.BrokerLink::(notify)<br>Success, was notification of state_transition=
<br>(ReinitializeFSM-LocalMaintenance) sent? sent<br>MainThread::INFO::2014=
-12-29<br>13:17:24,812::hosted_engine::327::ovirt_hosted_engine_ha.agent.ho=
sted_engine.HostedEngine::(start_monitoring)<br>Current state LocalMaintena=
nce (score: 0)<br>MainThread::INFO::2014-12-29<br>13:17:24,812::hosted_engi=
ne::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_mo=
nitoring)<br>Best remote host 10.0.0.93 (id: 2, score: 2400)<br>MainThread:=
:INFO::2014-12-29<br>13:17:34,851::state_decorators::124::ovirt_hosted_engi=
ne_ha.agent.hosted_engine.HostedEngine::(check)<br>Local maintenance detect=
ed<br>MainThread::INFO::2014-12-29<br>13:17:35,095::hosted_engine::327::ovi=
rt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br=
>Current state LocalMaintenance (score: 0)<br>MainThread::INFO::2014-12-29<=
br>13:17:35,095::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_en=
gine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (id: 2,=
score: 2400)<br>MainThread::INFO::2014-12-29<br>13:17:45,130::state_decora=
tors::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)=
<br>Local maintenance detected<br>MainThread::INFO::2014-12-29<br>13:17:45,=
368::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE=
ngine::(start_monitoring)<br>Current state LocalMaintenance (score: 0)<br>M=
ainThread::INFO::2014-12-29<br>13:17:45,368::hosted_engine::332::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Best r=
emote host 10.0.0.93 (id: 2, score: 2400)<br>^C<br>[root@compute2-3 ~]#<br>=
<div><br></div><br>[root@compute2-3 ~]# hosted-engine --vm-status<br><div><=
br></div><br>--=3D=3D Host 1 status =3D=3D--<br><div><br></div>Status up-to=
-date : True<=
br>Hostname =
: 10.0.0.94<br>Host ID &nb=
sp; : =
1<br>Engine status =
: {"health": "good", "vm": "up",<br>"detail": "up"}<br>=
Score =
: 0<br>Local maintenance &=
nbsp; : True<br>Host timestamp &nb=
sp; : 101495=
6<tel:1014956><br>Extra metadata (valid at timestamp):<br>metadata_pa=
rse_version=3D1<br>metadata_feature_version=3D1<br>timestamp=3D1014956<t=
el:1014956> (Mon Dec 29 13:20:19 2014)<br>host-id=3D1<br>score=3D0<br>ma=
intenance=3DTrue<br>state=3DLocalMaintenance<br><div><br></div><br>--=3D=3D=
Host 2 status =3D=3D--<br><div><br></div>Status up-to-date &=
nbsp; : True<br>Hostname &n=
bsp; =
: 10.0.0.93<br>Host ID &nb=
sp; : 2<br>Engine status &n=
bsp; :=
{"reason": "vm not running on<br>this host", "health": "bad", "vm": "down"=
, "detail": "unknown"}<br>Score &=
nbsp; : 2400<br>Loca=
l maintenance  =
;: False<br>Host timestamp =
: 866019<br>Extra metadata (valid at timestamp):<br>m=
etadata_parse_version=3D1<br>metadata_feature_version=3D1<br>timestamp=3D86=
6019 (Mon Dec 29 10:19:45 2014)<br>host-id=3D2<br>score=3D2400<br>maintenan=
ce=3DFalse<br>state=3DEngineDown<br><div><br></div><br>--=3D=3D Host 3 stat=
us =3D=3D--<br><div><br></div>Status up-to-date =
: True<br>Hostname =
: 10.=
0.0.92<br>Host ID &=
nbsp; : 3<br>Engine status =
: {"reason": =
"vm not running on<br>this host", "health": "bad", "vm": "down", "detail": =
"unknown"}<br>Score =
: 2400<br>Local maintenanc=
e : False<br>=
Host timestamp &nbs=
p; : 860493<br>Extra metadata (valid at timestamp):<br>metadata_pars=
e_version=3D1<br>metadata_feature_version=3D1<br>timestamp=3D860493 (Mon De=
c 29 10:20:35 2014)<br>host-id=3D3<br>score=3D2400<br>maintenance=3DFalse<b=
r>state=3DEngineDown<br>[root@compute2-3 ~]#<br>---<br>Thanks,<br>Cong<br><=
div><br></div><br><div><br></div>On 2014/12/29, at 8:43, "Artyom Lukianov" =
<alukiano@redhat.com<mailto:alukiano@redhat.com>> wrote:<br><di=
v><br></div>I see that HE vm run on host with ip 10.0.0.94, and two another=
hosts in "Local Maintenance" state, so vm will not migrate to any of them,=
can you try disable local maintenance on all hosts in HE environment and a=
fter enable "local maintenance" on host where HE vm run, and provide also o=
utput of hosted-engine --vm-status.<br>Failover works in next way:<br>1) if=
host where run HE vm have score less by 800 that some other host in HE env=
ironment, HE vm will migrate on host with best score<br>2) if something hap=
pen to vm(kernel panic, crash of service...), agent will restart HE vm on a=
nother host in HE environment with positive score<br>3) if put to local mai=
ntenance host with HE vm, vm will migrate to another host with positive sco=
re<br>Thanks.<br><div><br></div>----- Original Message -----<br>From: "Cong=
Yue" <Cong_Yue@alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com&g=
t;><br>To: "Artyom Lukianov" <alukiano@redhat.com<mailto:alukiano@=
redhat.com>><br>Cc: "Simone Tiraboschi" <stirabos(a)redhat.com<ma=
ilto:stirabos@redhat.com>>, users@ovirt.org<mailto:users@ovirt.org=
><br>Sent: Monday, December 29, 2014 6:30:42 PM<br>Subject: Re: [ovirt-u=
sers] VM failover with ovirt3.5<br><div><br></div>Thanks and the --vm-statu=
s log is as follows:<br>[root@compute2-2 ~]# hosted-engine --vm-status<br><=
div><br></div><br>--=3D=3D Host 1 status =3D=3D--<br><div><br></div>Status =
up-to-date : =
True<br>Hostname &n=
bsp; : 10.0.0.94<br>Host ID  =
; &nb=
sp;: 1<br>Engine status &n=
bsp; : {"health": "good", "vm": "up",<br>"detail": "up"=
}<br>Score &=
nbsp; : 2400<br>Local maintenance =
: False<br>Host time=
stamp =
: 1008087<br>Extra metadata (valid at timestamp):<br>metadata_parse_versio=
n=3D1<br>metadata_feature_version=3D1<br>timestamp=3D1008087<tel:1008087=
> (Mon Dec 29 11:25:51 2014)<br>host-id=3D1<br>score=3D2400<br>maintenan=
ce=3DFalse<br>state=3DEngineUp<br><div><br></div><br>--=3D=3D Host 2 status=
=3D=3D--<br><div><br></div>Status up-to-date &=
nbsp; : True<br>Hostname &n=
bsp; : 10.0.=
0.93<br>Host ID &nb=
sp; : 2<br>Engine status &n=
bsp; : {"reason": "v=
m not running on<br>this host", "health": "bad", "vm": "down", "detail": "u=
nknown"}<br>Score &=
nbsp; : 0<br>Local maintenance &nb=
sp; : True<br>Host t=
imestamp &nb=
sp; : 859142<br>Extra metadata (valid at timestamp):<br>metadata_parse_vers=
ion=3D1<br>metadata_feature_version=3D1<br>timestamp=3D859142 (Mon Dec 29 0=
8:25:08 2014)<br>host-id=3D2<br>score=3D0<br>maintenance=3DTrue<br>state=3D=
LocalMaintenance<br><div><br></div><br>--=3D=3D Host 3 status =3D=3D--<br><=
div><br></div>Status up-to-date &=
nbsp; : True<br>Hostname &n=
bsp; : 10.0.0.92<br>Host I=
D &nb=
sp; : 3<br>Engine status &n=
bsp; : {"reason": "vm not running =
on<br>this host", "health": "bad", "vm": "down", "detail": "unknown"}<br>Sc=
ore &=
nbsp; : 0<br>Local maintenance &nb=
sp; : True<br>Host timestamp  =
; : 853615<b=
r>Extra metadata (valid at timestamp):<br>metadata_parse_version=3D1<br>met=
adata_feature_version=3D1<br>timestamp=3D853615 (Mon Dec 29 08:25:57 2014)<=
br>host-id=3D3<br>score=3D0<br>maintenance=3DTrue<br>state=3DLocalMaintenan=
ce<br>You have new mail in /var/spool/mail/root<br>[root@compute2-2 ~]#<br>=
<div><br></div>Could you please explain how VM failover works inside ovirt?=
Is there any other debug option I can enable to check the problem?<br><div=
><br></div>Thanks,<br>Cong<br><div><br></div><br>On 2014/12/29, at 1:39, "A=
rtyom Lukianov" <alukiano@redhat.com<mailto:alukiano@redhat.com>&l=
t;mailto:alukiano@redhat.com>> wrote:<br><div><br></div>Can you also =
provide output of hosted-engine --vm-status please, previous time it was us=
eful, because I do not see something unusual.<br>Thanks<br><div><br></div>-=
---- Original Message -----<br>From: "Cong Yue" <Cong_Yue(a)alliedtelesis.=
com<mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedteles=
is.com>><br>To: "Artyom Lukianov" <alukiano@redhat.com<mailto:a=
lukiano@redhat.com><mailto:alukiano@redhat.com>><br>Cc: "Simone=
Tiraboschi" <stirabos@redhat.com<mailto:stirabos@redhat.com><m=
ailto:stirabos@redhat.com>>, users@ovirt.org<mailto:users@ovirt.or=
g><mailto:users@ovirt.org><br>Sent: Monday, December 29, 2014 7:15=
:24 AM<br>Subject: Re: [ovirt-users] VM failover with ovirt3.5<br><div><br>=
</div>Also I change the maintenance mode to local in another host. But also=
the VM in this host can not be migrated. The logs are as follows.<br><div>=
<br></div>[root@compute2-2 ~]# hosted-engine --set-maintenance --mode=3Dloc=
al<br>[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.lo=
g<br>MainThread::INFO::2014-12-28<br>21:09:04,184::hosted_engine::332::ovir=
t_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>=
Best remote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-1=
2-28<br>21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.host=
ed_engine.HostedEngine::(start_monitoring)<br>Current state EngineDown (sco=
re: 2400)<br>MainThread::INFO::2014-12-28<br>21:09:14,603::hosted_engine::3=
32::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitor=
ing)<br>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO=
::2014-12-28<br>21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.ag=
ent.hosted_engine.HostedEngine::(start_monitoring)<br>Current state EngineD=
own (score: 2400)<br>MainThread::INFO::2014-12-28<br>21:09:24,904::hosted_e=
ngine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start=
_monitoring)<br>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>MainThre=
ad::INFO::2014-12-28<br>21:09:35,026::states::437::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(consume)<br>Engine vm is running on host =
10.0.0.94 (id 1)<br>MainThread::INFO::2014-12-28<br>21:09:35,236::hosted_en=
gine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_=
monitoring)<br>Current state EngineDown (score: 2400)<br>MainThread::INFO::=
2014-12-28<br>21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agen=
t.hosted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0=
.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-12-28<br>21:09:45,604::h=
osted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:=
:(start_monitoring)<br>Current state EngineDown (score: 2400)<br>MainThread=
::INFO::2014-12-28<br>21:09:45,604::hosted_engine::332::ovirt_hosted_engine=
_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Best remote hos=
t 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-12-28<br>21:09:5=
5,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.Ho=
stedEngine::(check)<br>Local maintenance detected<br>MainThread::INFO::2014=
-12-28<br>21:09:55,701::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerl=
ink.BrokerLink::(notify)<br>Trying: notify time=3D1419829795.7 type=3Dstate=
_transition<br>detail=3DEngineDown-LocalMaintenance hostname=3D'compute2-2'=
<br>MainThread::INFO::2014-12-28<br>21:09:55,761::brokerlink::120::ovirt_ho=
sted_engine_ha.lib.brokerlink.BrokerLink::(notify)<br>Success, was notifica=
tion of state_transition<br>(EngineDown-LocalMaintenance) sent? sent<br>Mai=
nThread::INFO::2014-12-28<br>21:09:55,990::states::208::ovirt_hosted_engine=
_ha.agent.hosted_engine.HostedEngine::(score)<br>Score is 0 due to local ma=
intenance mode<br>MainThread::INFO::2014-12-28<br>21:09:55,990::hosted_engi=
ne::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_mo=
nitoring)<br>Current state LocalMaintenance (score: 0)<br>MainThread::INFO:=
:2014-12-28<br>21:09:55,991::hosted_engine::332::ovirt_hosted_engine_ha.age=
nt.hosted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.=
0.94 (id: 1, score: 2400)<br>^C<br>You have new mail in /var/spool/mail/roo=
t<br>[root@compute2-2 ~]# ps -ef | grep qemu<br>root 18420 &n=
bsp;2777 0 21:10<x-apple-data-detectors://39> pts/0 &nbs=
p;00:00:00<x-apple-data-detectors://40> grep --color=3Dauto qemu<br>q=
emu 29809 1 0 Dec19 ?  =
; 01:17:20 /usr/libexec/qemu-kvm<br>-name testvm2-2 -S -machine rhel6=
.5.0,accel=3Dkvm,usb=3Doff -cpu Nehalem<br>-m 500 -realtime mlock=3Doff -sm=
p<br>1,maxcpus=3D16,sockets=3D16,cores=3D1,threads=3D1 -uuid<br>c31e97d0-13=
5e-42da-9954-162b5228dce3 -smbios<br>type=3D1,manufacturer=3DoVirt,product=
=3DoVirt<br>Node,version=3D7-0.1406.el7.centos.2.5,serial=3D4C4C4544-0059-3=
610-8033-B4C04F395931,uuid=3Dc31e97d0-135e-42da-9954-162b5228dce3<br>-no-us=
er-config -nodefaults -chardev<br>socket,id=3Dcharmonitor,path=3D/var/lib/l=
ibvirt/qemu/testvm2-2.monitor,server,nowait<br>-mon chardev=3Dcharmonitor,i=
d=3Dmonitor,mode=3Dcontrol -rtc<br>base=3D2014-12-19T20:17:17<x-apple-da=
ta-detectors://42>,driftfix=3Dslew -no-kvm-pit-reinjection<br>-no-hpet -=
no-shutdown -boot strict=3Don -device<br>piix3-usb-uhci,id=3Dusb,bus=3Dpci.=
0,addr=3D0x1.0x2 -device<br>virtio-scsi-pci,id=3Dscsi0,bus=3Dpci.0,addr=3D0=
x4 -device<br>virtio-serial-pci,id=3Dvirtio-serial0,max_ports=3D16,bus=3Dpc=
i.0,addr=3D0x5<br>-drive if=3Dnone,id=3Ddrive-ide0-1-0,readonly=3Don,format=
=3Draw,serial=3D<br>-device ide-cd,bus=3Dide.1,unit=3D0,drive=3Ddrive-ide0-=
1-0,id=3Dide0-1-0<br>-drive file=3D/rhev/data-center/00000002-0002-0002-000=
2-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/5cbeb8c9-4f04-48=
d0-a5eb-78c49187c550/a0570e8c-9867-4ec4-818f-11e102fc4f9b,if=3Dnone,id=3Ddr=
ive-virtio-disk0,format=3Dqcow2,serial=3D5cbeb8c9-4f04-48d0-a5eb-78c49187c5=
50,cache=3Dnone,werror=3Dstop,rerror=3Dstop,aio=3Dthreads<br>-device virtio=
-blk-pci,scsi=3Doff,bus=3Dpci.0,addr=3D0x6,drive=3Ddrive-virtio-disk0,id=3D=
virtio-disk0,bootindex=3D1<br>-netdev tap,fd=3D28,id=3Dhostnet0,vhost=3Don,=
vhostfd=3D29 -device<br>virtio-net-pci,netdev=3Dhostnet0,id=3Dnet0,mac=3D00=
:1a:4a:db:94:00,bus=3Dpci.0,addr=3D0x3<br>-chardev socket,id=3Dcharchannel0=
,path=3D/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3=
.com.redhat.rhevm.vdsm,server,nowait<br>-device virtserialport,bus=3Dvirtio=
-serial0.0,nr=3D1,chardev=3Dcharchannel0,id=3Dchannel0,name=3Dcom.redhat.rh=
evm.vdsm<br>-chardev socket,id=3Dcharchannel1,path=3D/var/lib/libvirt/qemu/=
channels/c31e97d0-135e-42da-9954-162b5228dce3.org.qemu.guest_agent.0,server=
,nowait<br>-device virtserialport,bus=3Dvirtio-serial0.0,nr=3D2,chardev=3Dc=
harchannel1,id=3Dchannel1,name=3Dorg.qemu.guest_agent.0<br>-chardev spicevm=
c,id=3Dcharchannel2,name=3Dvdagent -device<br>virtserialport,bus=3Dvirtio-s=
erial0.0,nr=3D3,chardev=3Dcharchannel2,id=3Dchannel2,name=3Dcom.redhat.spic=
e.0<br>-spice tls-port=3D5901,addr=3D10.0.0.93,x509-dir=3D/etc/pki/vdsm/lib=
virt-spice,tls-channel=3Dmain,tls-channel=3Ddisplay,tls-channel=3Dinputs,tl=
s-channel=3Dcursor,tls-channel=3Dplayback,tls-channel=3Drecord,tls-channel=
=3Dsmartcard,tls-channel=3Dusbredir,seamless-migration=3Don<br>-k en-us -vg=
a qxl -global qxl-vga.ram_size=3D67108864<tel:67108864> -global<br>qx=
l-vga.vram_size=3D33554432<tel:33554432> -incoming tcp:[::]:49152 -de=
vice<br>virtio-balloon-pci,id=3Dballoon0,bus=3Dpci.0,addr=3D0x7<br>[root@co=
mpute2-2 ~]#<br><div><br></div>Thanks,<br>Cong<br><div><br></div><br>On 201=
4/12/28, at 20:53, "Yue, Cong" <Cong_Yue@alliedtelesis.com<mailto:Con=
g_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com><mai=
lto:Cong_Yue@alliedtelesis.com>> wrote:<br><div><br></div>I checked i=
t again and confirmed there is one guest VM is running on the top of this h=
ost. The log is as follows:<br><div><br></div>[root@compute2-1 vdsm]# ps -e=
f | grep qemu<br>qemu 2983 846 0 Dec19 ? &=
nbsp; 00:00:00<x-apple-data-detectors://0> [super=
vdsmServer] <defunct>
root      5489  3053  0 20:49 pts/0    00:00:00 grep --color=auto qemu
qemu     26128     1  0 Dec19 ?        01:09:19 /usr/libexec/qemu-kvm
-name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m 500
-realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
-uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
type=1,manufacturer=oVirt,product=oVirt Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:18:01,driftfix=slew -no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
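Note the `werror=stop,rerror=stop` options on the virtio disk's `-drive` argument above: with the `stop` error policy, qemu pauses the guest when the underlying storage returns an I/O error instead of passing the error through, which is why the VM goes into the paused state on the `abnormal vm stop device virtio-disk0` event. A minimal sketch of pulling the error policy out of a `-drive` string; the `drive_opts` helper is hypothetical, and the naive comma split is only enough for the options quoted here:

```python
def drive_opts(drive_arg: str) -> dict:
    """Split a qemu -drive argument of the form k=v,k=v,... into a dict.

    Naive: does not handle commas escaped inside values, which is fine
    for the options of interest in this thread.
    """
    opts = {}
    for part in drive_arg.split(","):
        key, _, value = part.partition("=")
        opts[key] = value
    return opts

# The virtio disk's -drive argument from the qemu-kvm command line above,
# trimmed to the options that control pause-on-error behaviour.
drive = "if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=threads"
opts = drive_opts(drive)
# werror/rerror = stop: pause the guest on a write/read error rather than
# reporting the error into the guest or exiting qemu.
print(opts["werror"], opts["rerror"])  # stop stop
```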
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28 20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Local maintenance detected
MainThread::INFO::2014-12-28 20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28 20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28 20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Local maintenance detected
MainThread::INFO::2014-12-28 20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28 20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28 20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Local maintenance detected
MainThread::INFO::2014-12-28 20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28 20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28 20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)

Thanks,
Cong

On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano@redhat.com> wrote:

I see that you set local maintenance on host3, which does not have the engine VM on it, so there is nothing to migrate from this host.
If you set local maintenance on host1, the VM must migrate to another host with a positive score.
Thanks
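Artyom's point can be reduced to a simple rule: the engine VM only moves off the host it is currently running on, and only to the remote host with the best positive score (a host in local maintenance reports score 0). This is a toy model of that placement rule using the scores from the status output quoted in this thread, not the actual ovirt-hosted-engine-ha code:

```python
def migration_target(scores, engine_host):
    """Toy placement rule from this thread: the engine VM migrates only
    off its current host, to the best-scoring other host with a positive
    score. Returns None when no eligible target exists."""
    candidates = [(h, s) for h, s in scores.items() if h != engine_host and s > 0]
    if not candidates:
        return None
    return max(candidates, key=lambda pair: pair[1])[0]

# Scores from the Dec 27 `hosted-engine --vm-status` output in this thread:
# host 3 (10.0.0.92) is in local maintenance, so its score is 0. The engine
# runs on host 1 (10.0.0.94), so maintenance on host 3 moves nothing.
scores = {"10.0.0.94": 2400, "10.0.0.93": 2400, "10.0.0.92": 0}

# Maintenance on host 1 instead (its score drops to 0) forces a move:
scores_h1_maint = {"10.0.0.94": 0, "10.0.0.93": 2400, "10.0.0.92": 0}
print(migration_target(scores_h1_maint, "10.0.0.94"))  # 10.0.0.93
```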
----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: "Simone Tiraboschi" <stirabos@redhat.com>
Cc: users@ovirt.org
Sent: Saturday, December 27, 2014 6:58:32 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Hi

I had a try with "hosted-engine --set-maintenance --mode=local" on compute2-1, which is host 3 in my cluster. From the log, it shows maintenance mode is detected, but migration does not happen.

The logs are as follows. Is there any other config I need to check?

[root@compute2-1 vdsm]# hosted-engine --vm-status
--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.94
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 836296
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=836296 (Sat Dec 27 11:42:39 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp

--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.93
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 687358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=687358 (Sat Dec 27 08:42:04 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown

--== Host 3 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.92
Host ID                            : 3
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 0
Local maintenance                  : True
Host timestamp                     : 681827
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=681827 (Sat Dec 27 08:42:40 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
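The "Extra metadata" block in each host section is plain key=value text, so a script can check which host the shared metadata says is in maintenance. A small sketch using host 3's metadata from the output above; the `parse_extra` helper is hypothetical, not part of hosted-engine:

```python
def parse_extra(text: str) -> dict:
    """Parse the key=value lines of an 'Extra metadata' block into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("=")
        fields[key.strip()] = value.strip()
    return fields

# Host 3's metadata as reported by `hosted-engine --vm-status` above.
extra = """\
metadata_parse_version=1
metadata_feature_version=1
timestamp=681827 (Sat Dec 27 08:42:40 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
"""
meta = parse_extra(extra)
print(meta["state"], meta["score"], meta["maintenance"])  # LocalMaintenance 0 True
```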
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27 08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Local maintenance detected
MainThread::INFO::2014-12-27 08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27 08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Local maintenance detected
MainThread::INFO::2014-12-27 08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27 08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) Local maintenance detected
MainThread::INFO::2014-12-27 08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27 08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)


[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27 11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27 11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27 11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27 11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27 11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27 11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27 11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27 11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
MainThread::INFO::2014-12-27 11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27 11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27 11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27 11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27 08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-27 08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.94 (id 1): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987 (Sat Dec 27 11:37:30 2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n', 'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status': {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400, 'maintenance': False, 'host-ts': 835987}
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.92 (id 3): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528 (Sat Dec 27 08:37:41 2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n', 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True, 'host-ts': 681528}
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215, 'gateway': True}
MainThread::INFO::2014-12-27 08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
Thanks,
Cong

On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos@redhat.com> wrote:

----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: "Simone Tiraboschi" <stirabos@redhat.com>
Cc: users@ovirt.org
Sent: Friday, December 19, 2014 7:22:10 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5
Thanks for the information. These are the logs for my three oVirt nodes.
From the output of hosted-engine --vm-status, it shows the engine state for my 2nd and 3rd oVirt nodes is DOWN.
Is this the reason why VM failover does not work in my environment?

No, they look OK: you can run the engine VM on a single host at a time.

How can I make the engine also work on my 2nd and 3rd oVirt nodes?

If you put host 1 in local maintenance mode ( hosted-engine --set-maintenance --mode=local ) the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put host 2 in local maintenance mode, the VM should migrate again.

Can you please try that and post the logs if something is going bad?
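When posting the logs, the lines worth pulling out are the (start_monitoring) ones, since they carry the agent's state and score at each poll. A small sketch that extracts them with a regex; the pattern is written against the agent.log lines quoted in this thread, not against any documented format guarantee:

```python
import re

# Matches the "(start_monitoring) Current state <State> (score: <n>)"
# lines as they appear in the agent.log excerpts quoted in this thread.
STATE_RE = re.compile(
    r"MainThread::INFO::(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})"
    r"::.*\(start_monitoring\)\s+Current state (?P<state>\w+) \(score: (?P<score>\d+)\)"
)

def state_history(log_text):
    """Return the (timestamp, state, score) tuples found in an agent.log dump."""
    return [
        (m.group("ts"), m.group("state"), int(m.group("score")))
        for m in STATE_RE.finditer(log_text)
    ]

# Two real lines from the log excerpts in this thread.
sample = (
    "MainThread::INFO::2014-12-28 20:49:27,646::hosted_engine::327::"
    "ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) "
    "Current state LocalMaintenance (score: 0)\n"
    "MainThread::INFO::2014-12-19 13:09:33,716::hosted_engine::327::"
    "ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) "
    "Current state EngineUp (score: 2400)\n"
)
print(state_history(sample))
```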
--
--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.94
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 150475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=150475 (Fri Dec 19 13:12:18 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp

--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.93
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 1572
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1572 (Fri Dec 19 10:12:18 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown

--== Host 3 status ==--

Status up-to-date                  : False
Hostname                           : 10.0.0.92
Host ID                            : 3
Engine status                      : unknown stale-data
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 987
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=987 (Fri Dec 19 10:09:58 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown

--
And the /var/log/ovirt-hosted-engine-ha/agent.log for the three oVirt nodes is as follows:
--
10.0.0.94 (hosted-engine-1)
---
MainThread::INFO::2014-12-19 13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
MainThread::INFO::2014-12-19 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.93 (id 2): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448 (Fri Dec 19 10:10:14 2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 'host-ts': 1448}
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.92 (id 3): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987 (Fri Dec 19 10:09:58 2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 'host-ts': 987}
MainThread::INFO::2014-12-19 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance': False, 'cpu-load': 0.0269, 'gateway': True}
MainThread::INFO::2014-12-19 13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
MainThread::INFO::2014-12-19 13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
----
10.0.0.93 (hosted-engine-2)
MainThread::INFO::2014-12-19 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)


10.0.0.92 (hosted-engine-3)
same as 10.0.0.93
--

-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:28 AM
To: Yue, Cong
Cc: users@ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5

----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: users@ovirt.org
Sent: Friday, December 19, 2014 2:14:33 AM
Subject: [ovirt-users] VM failover with ovirt3.5

Hi

In my environment, I have 3 oVirt nodes as one cluster. And on top of host-1, there is one VM to host the oVirt engine.

Also I have one external storage for the cluster to use as the data domain of engine and data.

I confirmed live migration works well in my environment.

But VM failover seems very buggy if I try to force one oVirt node to shut down. Sometimes the VM on the node which is shut down can migrate to another host, but it takes more than several minutes.

Sometimes it cannot migrate at all. Sometimes the VM only begins to move once the host is back.

Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/ ?

Is there some documentation that explains how VM failover works? And are there some reported bugs related to this?

http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram

Thanks in advance,

Cong
This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. If you are the intended recipient, please be advised that the content of this message is subject to access, review and disclosure by the sender's e-mail System Administrator.

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-------<br><div><br></div>_______________________________________________<b=
r>Users mailing list<br>Users(a)ovirt.org<br>http://lists.ovirt.org/mailman/l=
istinfo/users<br><div><br></div><br>End of Users Digest, Vol 39, Issue 171<=
br>**************************************<br></div><div><br></div></div></b=
ody></html>
------=_Part_1882534_1428653136.1419879250032--
10 years, 4 months
Re: [ovirt-users] VM failover with ovirt3.5
by Nikolai Sednev
------=_Part_1875460_365779577.1419876418683
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Hi,
Your guest VM has to be defined as "Highly Available":
Highly Available
Select this check box if the virtual machine is to be highly available. For example, in cases of host maintenance or failure, the virtual machine is automatically moved to or re-launched on another host. If the host is manually shut down by the system administrator, the virtual machine is not automatically moved to another host.
Note that this option is unavailable if the Migration Options setting in the Hosts tab is set to either Allow manual migration only or No migration. For a virtual machine to be highly available, it must be possible for the Manager to migrate the virtual machine to other available hosts as necessary.
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 29, 2014 7:50:07 PM
Subject: Users Digest, Vol 39, Issue 169
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: VM failover with ovirt3.5 (Yue, Cong)
----------------------------------------------------------------------
Message: 1
Date: Mon, 29 Dec 2014 09:49:58 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Artyom Lukianov <alukiano(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Message-ID: <11A51118-8B03-41FE-8FD0-C81AC8897EF6(a)alliedtelesis.com>
Content-Type: text/plain; charset="us-ascii"
Thanks for the detailed explanation. Do you mean only the HE VM can fail over? I want to try it with a VM on any host, to check whether VMs can fail over to another host automatically, as in VMware or XenServer.
I will try as you advised and provide the logs for your further advice.
Thanks,
Cong
> On 2014/12/29, at 8:43, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> I see that the HE VM runs on the host with IP 10.0.0.94, and the two other hosts are in "Local Maintenance" state, so the VM will not migrate to either of them. Can you try disabling local maintenance on all hosts in the HE environment, then enabling "local maintenance" on the host where the HE VM runs? Please also provide the output of hosted-engine --vm-status.
> Failover works as follows:
> 1) if the host running the HE VM has a score at least 800 lower than some other host in the HE environment, the HE VM will migrate to the host with the best score
> 2) if something happens to the VM (kernel panic, crash of a service, ...), the agent will restart the HE VM on another host in the HE environment with a positive score
> 3) if the host with the HE VM is put into local maintenance, the VM will migrate to another host with a positive score
> Thanks.
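Artyom's three failover rules above can be sketched as a small decision function. This is a hypothetical simplification for illustration only, not the actual ovirt-hosted-engine-ha agent code; the 2400 base score and the 800-point margin are taken from the scores and rules quoted in this thread.

```python
# Hypothetical sketch of the score-based failover rules described above.
# Not the real ovirt-hosted-engine-ha code.
BASE_SCORE = 2400       # score a healthy host advertises in the agent log
MIGRATION_MARGIN = 800  # rule 1: migrate if another host scores this much higher


def host_score(in_local_maintenance):
    """Rule 3: a host in local maintenance advertises score 0."""
    return 0 if in_local_maintenance else BASE_SCORE


def should_migrate(local_score, remote_scores):
    """Rule 1: the HE VM moves when some other host beats us by the margin."""
    return any(r - local_score >= MIGRATION_MARGIN for r in remote_scores)


def should_restart_elsewhere(vm_healthy, remote_scores):
    """Rule 2: on a VM crash, restart on any other host with a positive score."""
    return (not vm_healthy) and any(r > 0 for r in remote_scores)
```

For example, a host entering local maintenance drops to score 0, so should_migrate(0, [2400]) is True, matching the LocalMaintenance transitions visible in the agent logs.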
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Artyom Lukianov" <alukiano(a)redhat.com>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, December 29, 2014 6:30:42 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> Thanks and the --vm-status log is as follows:
> [root@compute2-2 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 1008087
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1008087 (Mon Dec 29 11:25:51 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 859142
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=859142 (Mon Dec 29 08:25:08 2014)
> host-id=2
> score=0
> maintenance=True
> state=LocalMaintenance
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 853615
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=853615 (Mon Dec 29 08:25:57 2014)
> host-id=3
> score=0
> maintenance=True
> state=LocalMaintenance
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]#
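The "Extra metadata" block in the hosted-engine --vm-status output above is a plain key=value listing. Here is a minimal parser for it, assuming only the format shown in this thread (illustrative, not part of any oVirt tool):

```python
def parse_extra_metadata(text):
    """Parse 'Extra metadata' key=value lines from hosted-engine --vm-status.

    Integer values (score, host-id, timestamp) are cast to int, True/False
    to bool, and a trailing human-readable date like "(Mon Dec 29 ...)"
    after the timestamp is dropped.
    """
    data = {}
    for line in text.strip().splitlines():
        key, _, value = line.strip().partition("=")
        value = value.split(" (", 1)[0]  # drop the date suffix if present
        if value in ("True", "False"):
            data[key] = value == "True"
        elif value.isdigit():
            data[key] = int(value)
        else:
            data[key] = value
    return data


# The Host 1 block from the status output above:
sample = """\
metadata_parse_version=1
metadata_feature_version=1
timestamp=1008087 (Mon Dec 29 11:25:51 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
"""
```

With that sample, parse_extra_metadata(sample) yields score 2400, maintenance False and state "EngineUp".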
>
> Could you please explain how VM failover works inside oVirt? Is there any other debug option I can enable to investigate the problem?
>
> Thanks,
> Cong
>
>
> On 2014/12/29, at 1:39, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> Can you also provide the output of hosted-engine --vm-status please? It was useful last time, because I do not see anything unusual.
> Thanks
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Artyom Lukianov" <alukiano(a)redhat.com>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, December 29, 2014 7:15:24 AM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> I also changed the maintenance mode to local on another host, but the VM on this host still cannot be migrated. The logs are as follows.
>
> [root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
> [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-28
> 21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm is running on host 10.0.0.94 (id 1)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:55,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 21:09:55,701::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419829795.7 type=state_transition
> detail=EngineDown-LocalMaintenance hostname='compute2-2'
> MainThread::INFO::2014-12-28
> 21:09:55,761::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (EngineDown-LocalMaintenance) sent? sent
> MainThread::INFO::2014-12-28
> 21:09:55,990::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-28
> 21:09:55,990::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 21:09:55,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> ^C
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]# ps -ef | grep qemu
> root 18420 2777 0 21:10 pts/0 00:00:00 grep --color=auto qemu
> qemu 29809 1 0 Dec19 ? 01:17:20 /usr/libexec/qemu-kvm
> -name testvm2-2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem
> -m 500 -realtime mlock=off -smp
> 1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> c31e97d0-135e-42da-9954-162b5228dce3 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0059-3610-8033-B4C04F395931,uuid=c31e97d0-135e-42da-9954-162b5228dce3
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2-2.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2014-12-19T20:17:17,driftfix=slew -no-kvm-pit-reinjection
> -no-hpet -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/5cbeb8c9-4f04-48d0-a5eb-78c49187c550/a0570e8c-9867-4ec4-818f-11e102fc4f9b,if=none,id=drive-virtio-disk0,format=qcow2,serial=5cbeb8c9-4f04-48d0-a5eb-78c49187c550,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:00,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5901,addr=10.0.0.93,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
> qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
> [root@compute2-2 ~]#
>
> Thanks,
> Cong
>
>
> On 2014/12/28, at 20:53, "Yue, Cong" <Cong_Yue(a)alliedtelesis.com> wrote:
>
> I checked it again and confirmed there is one guest VM running on top of this host. The log is as follows:
>
> [root@compute2-1 vdsm]# ps -ef | grep qemu
> qemu 2983 846 0 Dec19 ? 00:00:00 [supervdsmServer] <defunct>
> root 5489 3053 0 20:49 pts/0 00:00:00 grep --color=auto qemu
> qemu 26128 1 0 Dec19 ? 01:09:19 /usr/libexec/qemu-kvm
> -name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
> 500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
> -uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2014-12-19T20:18:01,driftfix=slew -no-kvm-pit-reinjection
> -no-hpet -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
> qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
> [root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-28
> 20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-28
> 20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
> Thanks,
> Cong
>
>
> On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> I see that you set local maintenance on host3, which does not have the engine VM on it, so there is nothing to migrate from this host.
> If you set local maintenance on host1, the VM should migrate to another host with a positive score.
> Thanks
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Simone Tiraboschi" <stirabos(a)redhat.com>
> Cc: users(a)ovirt.org
> Sent: Saturday, December 27, 2014 6:58:32 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> Hi
>
> I tried "hosted-engine --set-maintenance --mode=local" on
> compute2-1, which is host 3 in my cluster. From the log, it shows
> maintenance mode is detected, but migration does not happen.
>
> The logs are as follows. Is there any other config I need to check?
>
> [root@compute2-1 vdsm]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 836296
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=836296 (Sat Dec 27 11:42:39 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 687358
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=687358 (Sat Dec 27 08:42:04 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 681827
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=681827 (Sat Dec 27 08:42:40 2014)
> host-id=3
> score=0
> maintenance=True
> state=LocalMaintenance
> [root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
>
>
> [root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-27
> 11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
>
>
> [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm is running on host 10.0.0.94 (id 1)
> MainThread::INFO::2014-12-27
> 08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Global metadata: {'maintenance': False}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.94 (id 1): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987
> (Sat Dec 27 11:37:30
> 2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
> 'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status':
> {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400,
> 'maintenance': False, 'host-ts': 835987}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.92 (id 3): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528
> (Sat Dec 27 08:37:41
> 2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n',
> 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True,
> 'host-ts': 681528}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Local (id 2): {'engine-health': {'reason': 'vm not running on this
> host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':
> True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215,
> 'gateway': True}
> MainThread::INFO::2014-12-27
> 08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
> Thanks,
> Cong
>
> On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos(a)redhat.com> wrote:
>
>
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Simone Tiraboschi" <stirabos(a)redhat.com>
> Cc: users(a)ovirt.org
> Sent: Friday, December 19, 2014 7:22:10 PM
> Subject: RE: [ovirt-users] VM failover with ovirt3.5
>
> Thanks for the information. This is the log for my three oVirt nodes.
> From the output of hosted-engine --vm-status, the engine state for
> my 2nd and 3rd oVirt nodes is DOWN.
> Is this the reason why VM failover does not work in my environment?
>
> No, they look OK: you can run the engine VM on a single host at a time.
>
> How can I also make the engine work on my 2nd and 3rd oVirt nodes?
>
> If you put the host 1 in local maintenance mode ( hosted-engine --set-maintenance --mode=local ) the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put host 2 in local maintenance mode the VM should migrate again.
>
> Can you please try that and post the logs if something goes wrong?
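The agent log lines quoted in this thread (the state_transition detail=EngineDown-LocalMaintenance notification, "Score is 0 due to local maintenance mode") suggest a simple state machine behind the HA agent. Here is a toy model of the maintenance round-trip Simone describes, purely illustrative and not the real HostedEngine class:

```python
# Toy model of the HA agent states seen in the logs: EngineUp, EngineDown,
# LocalMaintenance. Hypothetical simplification, not ovirt-hosted-engine-ha code.
def next_state(local_maintenance, engine_running_here):
    if local_maintenance:
        return "LocalMaintenance"  # score drops to 0, the HE VM must leave
    if engine_running_here:
        return "EngineUp"
    return "EngineDown"


# Round-trip: host 1 enters local maintenance, the HE VM migrates to host 2,
# then host 1 is reactivated with --mode=none and waits in EngineDown.
host1_during = next_state(local_maintenance=True, engine_running_here=False)
host2_during = next_state(local_maintenance=False, engine_running_here=True)
host1_after = next_state(local_maintenance=False, engine_running_here=False)
```

This mirrors the EngineDown-LocalMaintenance transition in the agent.log excerpts: the maintained host reports LocalMaintenance (score 0) while the host that receives the VM reports EngineUp.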
>
>
> --
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 150475
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=150475 (Fri Dec 19 13:12:18 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 1572
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1572 (Fri Dec 19 10:12:18 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : False
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : unknown stale-data
> Score : 2400
> Local maintenance : False
> Host timestamp : 987
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=987 (Fri Dec 19 10:09:58 2014)
> host-id=3
> score=2400
> maintenance=False
> state=EngineDown
>
> --
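Host 3 above reports "Status up-to-date : False" with an old timestamp (987). Presumably the agent marks a host's data stale when its "Host timestamp" stops advancing between refreshes; the check below is only an illustration of that idea, not the agent's actual code:

```shell
#!/bin/sh
# Sketch: why host 3 can show "Status up-to-date : False" while hosts 1
# and 2 look fresh. A host is treated as stale here when its metadata
# timestamp did not advance between two refreshes (illustrative logic,
# not ovirt-hosted-engine-ha's real implementation).
is_stale() {  # usage: is_stale PREV_TS CURR_TS ; exit 0 when stale
    [ "$2" -le "$1" ]
}

# Host 3's timestamp stayed at 987 between refreshes -> stale data.
is_stale 987 987 && echo "host 3: stale-data"
# Host 1's timestamp kept advancing -> data is up to date.
if is_stale 150400 150475; then :; else echo "host 1: up-to-date"; fi
```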
> And the /var/log/ovirt-hosted-engine-ha/agent.log for three ovirt nodes are
> as follows:
> --
> 10.0.0.94 (hosted-engine-1)
> ---
> MainThread::INFO::2014-12-19
> 13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-19
> 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Global metadata: {'maintenance': False}
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.93 (id 2): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448
> (Fri Dec 19 10:10:14
> 2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
> 'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
> 'host-ts': 1448}
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.92 (id 3): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987
> (Fri Dec 19 10:09:58
> 2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
> 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
> 'host-ts': 987}
> MainThread::INFO::2014-12-19
> 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up',
> 'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance':
> False, 'cpu-load': 0.0269, 'gateway': True}
> MainThread::INFO::2014-12-19
> 13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-19
> 13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> ----
>
> 10.0.0.93 (hosted-engine-2)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
>
> 10.0.0.92 (hosted-engine-3)
> same as 10.0.0.93
> --
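The repetitive start_monitoring output above is easier to read as distinct state transitions. A small filter can condense it; the pattern below is inferred from the log excerpts quoted in this thread, and the `summarize` helper is a convenience added here:

```shell
#!/bin/sh
# Sketch: condense agent.log start_monitoring noise into its distinct
# "Current state ... (score: ...)" transitions. Pattern inferred from
# the excerpts above, not from any official log format spec.
summarize() {
    grep -o 'Current state [A-Za-z]* (score: [0-9]*)' | uniq
}

# Demo on sample lines; on a host you would instead run:
#   summarize < /var/log/ovirt-hosted-engine-ha/agent.log
summarize <<'EOF'
Current state EngineUp (score: 2400)
Current state EngineUp (score: 2400)
Current state EngineDown (score: 2400)
EOF
```

On the logs above this would collapse hundreds of lines into the few EngineUp/EngineDown transitions that actually matter for debugging failover.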
>
> -----Original Message-----
> From: Simone Tiraboschi [mailto:stirabos@redhat.com]
> Sent: Friday, December 19, 2014 12:28 AM
> To: Yue, Cong
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
>
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: users(a)ovirt.org
> Sent: Friday, December 19, 2014 2:14:33 AM
> Subject: [ovirt-users] VM failover with ovirt3.5
>
>
>
> Hi
>
>
>
> In my environment, I have 3 ovirt nodes as one cluster. And on top of
> host-1, there is one vm to host ovirt engine.
>
> Also I have one external storage for the cluster to use as data domain
> of engine and data.
>
> I confirmed live migration works well in my environment.
>
> But VM failover seems very buggy if I force one oVirt node to shut down.
> Sometimes the VM on the node that was shut down can migrate to another
> host, but it takes several minutes or more.
>
> Sometimes it can not migrate at all. Sometimes the VM only starts to
> move once the host is back.
>
> Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/
> ?
>
> Is there some documentation explaining how VM failover works? And
> are there any reported bugs related to this?
>
> http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
>
> Thanks in advance,
>
> Cong
>
>
>
>
> This e-mail message is for the sole use of the intended recipient(s)
> and may contain confidential and privileged information. Any
> unauthorized review, use, disclosure or distribution is prohibited. If
> you are not the intended recipient, please contact the sender by reply
> e-mail and destroy all copies of the original message. If you are the
> intended recipient, please be advised that the content of this message
> is subject to access, review and disclosure by the sender's e-mail System
> Administrator.
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
------------------------------
End of Users Digest, Vol 39, Issue 169
**************************************
------=_Part_1875460_365779577.1419876418683

Hi,
Your guest VM has to be defined as "Highly Available".

From the documentation ("Virtual Machine: High Availability Settings"):

Highly Available: Select this check box if the virtual machine is to be
highly available. For example, in cases of host maintenance or failure,
the virtual machine is automatically moved to or re-launched on another
host. If the host is manually shut down by the system administrator, the
virtual machine is not automatically moved to another host. Note that
this option is unavailable if the Migration Options setting in the Hosts
tab is set to either Allow manual migration only or No migration. For a
virtual machine to be highly available, it must be possible for the
Manager to migrate the virtual machine to other available hosts as
necessary.

Thanks in advance.

Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501

Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev

----------------------------------------

From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 29, 2014 7:50:07 PM
Subject: Users Digest, Vol 39, Issue 169

Today's Topics:

   1. Re: VM failover with ovirt3.5 (Yue, Cong)

----------------------------------------------------------------------

Message: 1
Date: Mon, 29 Dec 2014 09:49:58 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Artyom Lukianov <alukiano(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Message-ID: <11A51118-8B03-41FE-8FD0-C81AC8897EF6(a)alliedtelesis.com>
Content-Type: text/plain; charset="us-ascii"

Thanks for the detailed explanation. Do you mean only the HE VM can fail
over? I want to try this with a VM on any host, to check whether a VM can
fail over to another host automatically, like VMware or XenServer.
I will try as you advised and provide the logs for your further advice.

Thanks,
Cong

> On 2014/12/29, at 8:43, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> I see that the HE vm runs on the host with IP 10.0.0.94, and the two
> other hosts are in "Local Maintenance" state, so the vm will not migrate
> to either of them. Can you try disabling local maintenance on all hosts
> in the HE environment, then enabling "local maintenance" on the host
> where the HE vm runs, and also provide the output of
> hosted-engine --vm-status?
> Failover works in the following way:
> 1) if the host where the HE vm runs has a score lower by 800 than some
> other host in the HE environment, the HE vm will migrate to the host
> with the best score
> 2) if something happens to the vm (kernel panic, crash of a service, ...),
> the agent will restart the HE vm on another host in the HE environment
> with a positive score
> 3) if the host with the HE vm is put into local maintenance, the vm will
> migrate to another host with a positive score
> Thanks.
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Artyom Lukianov" <alukiano(a)redhat.com>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, December 29, 2014 6:30:42 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> Thanks, and the --vm-status log is as follows:
> [root@compute2-2 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date                  : True
> Hostname                           : 10.0.0.94
> Host ID                            : 1
> Engine status                      : {"health": "good", "vm": "up",
> "detail": "up"}
> Score                              : 2400
> Local maintenance                  : False
> Host timestamp                     : 1008087
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1008087 (Mon Dec 29 11:25:51 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date                  : True
> Hostname                           : 10.0.0.93
> Host ID                            : 2
> Engine status                      : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score                              : 0
> Local maintenance                  : True
> Host timestamp                     : 859142
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=859142 (Mon Dec 29 08:25:08 2014)
> host-id=2
> score=0
> maintenance=True
> state=LocalMaintenance
>
>
> --== Host 3 status ==--
>
> Status up-to-date                  : True
> Hostname                           : 10.0.0.92
> Host ID                            : 3
> Engine status                      : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score                              : 0
> Local maintenance                  : True
> Host timestamp                     : 853615
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=853615 (Mon Dec 29 08:25:57 2014)
> host-id=3
> score=0
> maintenance=True
> state=LocalMaintenance
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]#
>
> Could you please explain how VM failover works inside oVirt? Is there
> any other debug option I can enable to check the problem?
>
> Thanks,
> Cong
>
>
> On 2014/12/29, at 1:39, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> Can you also provide the output of hosted-engine --vm-status please?
> Previous time it was useful, because I do not see anything unusual.
> Thanks
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Artyom Lukianov" <alukiano(a)redhat.com>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, December 29, 2014 7:15:24 AM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> I also changed the maintenance mode to local on another host, but the VM
> on this host can not be migrated either. The logs are as follows.
>
> [root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
> [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-28
> 21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm is running on host 10.0.0.94 (id 1)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:55,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 21:09:55,701::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419829795.7 type=state_transition
> detail=EngineDown-LocalMaintenance hostname='compute2-2'
> MainThread::INFO::2014-12-28
> 21:09:55,761::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (EngineDown-LocalMaintenance) sent? sent
> MainThread::INFO::2014-12-28
> 21:09:55,990::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-28
> 21:09:55,990::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 21:09:55,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> ^C
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]# ps -ef | grep qemu
> root     18420  2777  0 21:10 pts/0    00:00:00 grep --color=auto qemu
> qemu     29809     1  0 Dec19 ?        01:17:20 /usr/libexec/qemu-kvm
> -name testvm2-2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem
> -m 500 -realtime mlock=off -smp
> 1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> c31e97d0-135e-42da-9954-162b5228dce3 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0059-3610-8033-B4C04F395931,uuid=c31e97d0-135e-42da-9954-162b5228dce3
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2-2.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2014-12-19T20:17:17,driftfix=slew -no-kvm-pit-reinjection
> -no-hpet -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/5cbeb8c9-4f04-48d0-a5eb-78c49187c550/a0570e8c-9867-4ec4-818f-11e102fc4f9b,if=none,id=drive-virtio-disk0,format=qcow2,serial=5cbeb8c9-4f04-48d0-a5eb-78c49187c550,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:00,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5901,addr=10.0.0.93,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=
=3Drecord,tls-channel=3Dsmartcard,tls-channel=3Dusbredir,seamless-migration=
=3Don<br>> -k en-us -vga qxl -global qxl-vga.ram_size=3D67108864<tel:=
67108864> -global<br>> qxl-vga.vram_size=3D33554432<tel:33554432&g=
t; -incoming tcp:[::]:49152 -device<br>> virtio-balloon-pci,id=3Dballoon=
0,bus=3Dpci.0,addr=3D0x7<br>> [root@compute2-2 ~]#<br>><br>> Thank=
s,<br>> Cong<br>><br>><br>> On 2014/12/28, at 20:53, "Yue, Cong=
" <Cong_Yue@alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>&l=
t;mailto:Cong_Yue@alliedtelesis.com>> wrote:<br>><br>> I checke=
d it again and confirmed there is one guest VM is running on the top of thi=
s host. The log is as follows:<br>><br>> [root@compute2-1 vdsm]# ps -=
ef | grep qemu<br>> qemu 2983 846 0 Dec=
19 ? 00:00:00<x-apple-data-detectors://0> =
[supervdsmServer] <defunct><br>> root 5489 &nb=
sp;3053 0 20:49<x-apple-data-detectors://1> pts/0 =
00:00:00<x-apple-data-detectors://2> grep --color=3Dauto qemu<br>>=
qemu 26128 1 0 Dec19 ? &nb=
sp; 01:09:19 /usr/libexec/qemu-kvm<br>> -name testvm2 -S -machine =
rhel6.5.0,accel=3Dkvm,usb=3Doff -cpu Nehalem -m<br>> 500 -realtime mlock=
=3Doff -smp 1,maxcpus=3D16,sockets=3D16,cores=3D1,threads=3D1<br>> -uuid=
e46bca87-4df5-4287-844b-90a26fccef33 -smbios<br>> type=3D1,manufacturer=
=3DoVirt,product=3DoVirt<br>> Node,version=3D7-0.1406.el7.centos.2.5,ser=
ial=3D4C4C4544-0030-3310-8059-B8C04F585231,uuid=3De46bca87-4df5-4287-844b-9=
0a26fccef33<br>> -no-user-config -nodefaults -chardev<br>> socket,id=
=3Dcharmonitor,path=3D/var/lib/libvirt/qemu/testvm2.monitor,server,nowait<b=
r>> -mon chardev=3Dcharmonitor,id=3Dmonitor,mode=3Dcontrol -rtc<br>> =
base=3D2014-12-19T20:18:01<x-apple-data-detectors://4>,driftfix=3Dsle=
w -no-kvm-pit-reinjection<br>> -no-hpet -no-shutdown -boot strict=3Don -=
device<br>> piix3-usb-uhci,id=3Dusb,bus=3Dpci.0,addr=3D0x1.0x2 -device<b=
r>> virtio-scsi-pci,id=3Dscsi0,bus=3Dpci.0,addr=3D0x4 -device<br>> vi=
rtio-serial-pci,id=3Dvirtio-serial0,max_ports=3D16,bus=3Dpci.0,addr=3D0x5<b=
r>> -drive if=3Dnone,id=3Ddrive-ide0-1-0,readonly=3Don,format=3Draw,seri=
al=3D<br>> -device ide-cd,bus=3Dide.1,unit=3D0,drive=3Ddrive-ide0-1-0,id=
=3Dide0-1-0<br>> -drive file=3D/rhev/data-center/00000002-0002-0002-0002=
-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41a=
f-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=3Dnone,id=3Ddri=
ve-virtio-disk0,format=3Dqcow2,serial=3Db4b5426b-95e3-41af-b286-da245891cda=
f,cache=3Dnone,werror=3Dstop,rerror=3Dstop,aio=3Dthreads<br>> -device vi=
rtio-blk-pci,scsi=3Doff,bus=3Dpci.0,addr=3D0x6,drive=3Ddrive-virtio-disk0,i=
d=3Dvirtio-disk0,bootindex=3D1<br>> -netdev tap,fd=3D26,id=3Dhostnet0,vh=
ost=3Don,vhostfd=3D27 -device<br>> virtio-net-pci,netdev=3Dhostnet0,id=
=3Dnet0,mac=3D00:1a:4a:db:94:01,bus=3Dpci.0,addr=3D0x3<br>> -chardev soc=
ket,id=3Dcharchannel0,path=3D/var/lib/libvirt/qemu/channels/e46bca87-4df5-4=
287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait<br>> -device v=
irtserialport,bus=3Dvirtio-serial0.0,nr=3D1,chardev=3Dcharchannel0,id=3Dcha=
nnel0,name=3Dcom.redhat.rhevm.vdsm<br>> -chardev socket,id=3Dcharchannel=
1,path=3D/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef3=
3.org.qemu.guest_agent.0,server,nowait<br>> -device virtserialport,bus=
=3Dvirtio-serial0.0,nr=3D2,chardev=3Dcharchannel1,id=3Dchannel1,name=3Dorg.=
qemu.guest_agent.0<br>> -chardev spicevmc,id=3Dcharchannel2,name=3Dvdage=
nt -device<br>> virtserialport,bus=3Dvirtio-serial0.0,nr=3D3,chardev=3Dc=
harchannel2,id=3Dchannel2,name=3Dcom.redhat.spice.0<br>> -spice tls-port=
=3D5900,addr=3D10.0.0.92,x509-dir=3D/etc/pki/vdsm/libvirt-spice,tls-channel=
=3Dmain,tls-channel=3Ddisplay,tls-channel=3Dinputs,tls-channel=3Dcursor,tls=
-channel=3Dplayback,tls-channel=3Drecord,tls-channel=3Dsmartcard,tls-channe=
l=3Dusbredir,seamless-migration=3Don<br>> -k en-us -vga qxl -global qxl-=
vga.ram_size=3D67108864<tel:67108864> -global<br>> qxl-vga.vram_si=
ze=3D33554432<tel:33554432> -incoming tcp:[::]:49152 -device<br>> =
virtio-balloon-pci,id=3Dballoon0,bus=3Dpci.0,addr=3D0x7<br>> [root@compu=
te2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log<br>> Main=
Thread::INFO::2014-12-28<br>> 20:49:27,315::state_decorators::124::ovirt=
_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>> Local m=
aintenance detected<br>> MainThread::INFO::2014-12-28<br>> 20:49:27,6=
46::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEn=
gine::(start_monitoring)<br>> Current state LocalMaintenance (score: 0)<=
br>> MainThread::INFO::2014-12-28<br>> 20:49:27,646::hosted_engine::3=
32::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitor=
ing)<br>> Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainTh=
read::INFO::2014-12-28<br>> 20:49:37,732::state_decorators::124::ovirt_h=
osted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>> Local mai=
ntenance detected<br>> MainThread::INFO::2014-12-28<br>> 20:49:37,961=
::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngi=
ne::(start_monitoring)<br>> Current state LocalMaintenance (score: 0)<br=
>> MainThread::INFO::2014-12-28<br>> 20:49:37,961::hosted_engine::332=
::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitorin=
g)<br>> Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThre=
ad::INFO::2014-12-28<br>> 20:49:48,048::state_decorators::124::ovirt_hos=
ted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>> Local maint=
enance detected<br>> MainThread::INFO::2014-12-28<br>> 20:49:48,319::=
states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(scor=
e)<br>> Score is 0 due to local maintenance mode<br>> MainThread::INF=
O::2014-12-28<br>> 20:49:48,319::hosted_engine::327::ovirt_hosted_engine=
_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current st=
ate LocalMaintenance (score: 0)<br>> MainThread::INFO::2014-12-28<br>>=
; 20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_eng=
ine.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id=
: 1, score: 2400)<br>><br>> Thanks,<br>> Cong<br>><br>><br>&=
gt; On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano(a)redhat.com<ma=
ilto:alukiano@redhat.com><mailto:alukiano@redhat.com>> wrote:<b=
r>><br>> I see that you set local maintenance on host3 that do not ha=
ve engine vm on it, so it nothing to migrate from this host.<br>> If you=
set local maintenance on host1, vm must migrate to another host with posit=
ive score.<br>> Thanks<br>><br>> ----- Original Message -----<br>&=
gt; From: "Cong Yue" <Cong_Yue@alliedtelesis.com<mailto:Cong_Yue@alli=
edtelesis.com><mailto:Cong_Yue@alliedtelesis.com>><br>> To: =
"Simone Tiraboschi" <stirabos@redhat.com<mailto:stirabos@redhat.com&g=
t;<mailto:stirabos@redhat.com>><br>> Cc: users(a)ovirt.org<mai=
lto:users@ovirt.org><mailto:users@ovirt.org><br>> Sent: Saturda=
y, December 27, 2014 6:58:32 PM<br>> Subject: Re: [ovirt-users] VM failo=
ver with ovirt3.5<br>><br>> Hi<br>><br>> I had a try with "host=
ed-engine --set-maintence --mode=3Dlocal" on<br>> compute2-1, which is h=
ost 3 in my cluster. From the log, it shows<br>> maintence mode is decte=
cted, but migration does not happen.<br>><br>> The logs are as follow=
s. Is there any other config I need to check?<br>><br>> [root@compute=
2-1 vdsm]# hosted-engine --vm-status<br>><br>><br>> --=3D=3D Host =
1 status =3D=3D-<br>><br>> Status up-to-date &nb=
sp; : True<br>> Hostname =
&nbs=
p; : 10.0.0.94<br>> Host ID &n=
bsp; : 1<br>> Engine sta=
tus &=
nbsp;: {"health": "good", "vm": "up",<br>> "detail": "up"}<br>> Score=
&nbs=
p; : 2400<br>> Local maintenance  =
; : False<br>> Host time=
stamp =
: 836296<br>> Extra metadata (valid at timestamp):<br>> metadata_par=
se_version=3D1<br>> metadata_feature_version=3D1<br>> timestamp=3D836=
296 (Sat Dec 27 11:42:39 2014)<br>> host-id=3D1<br>> score=3D2400<br>=
> maintenance=3DFalse<br>> state=3DEngineUp<br>><br>><br>> -=
-=3D=3D Host 2 status =3D=3D--<br>><br>> Status up-to-date &nb=
sp; : True<br>> Hostname=
&nbs=
p; : 10.0.0.93<br>> Host ID &n=
bsp; : 2<br>&=
gt; Engine status &=
nbsp; : {"reason": "vm not running on<br>> this host", "hea=
lth": "bad", "vm": "down", "detail": "unknown"}<br>> Score =
&nbs=
p; : 2400<br>> Local maintenance  =
; : False<br>> Host timestamp &=
nbsp; : 687358<br>&=
gt; Extra metadata (valid at timestamp):<br>> metadata_parse_version=3D1=
<br>> metadata_feature_version=3D1<br>> timestamp=3D687358 (Sat Dec 2=
7 08:42:04 2014)<br>> host-id=3D2<br>> score=3D2400<br>> maintenan=
ce=3DFalse<br>> state=3DEngineDown<br>><br>><br>> --=3D=3D Host=
3 status =3D=3D--<br>><br>> Status up-to-date &=
nbsp; : True<br>> Hostname &nbs=
p; &n=
bsp; : 10.0.0.92<br>> Host ID =
: 3<br>> Engine s=
tatus =
: {"reason": "vm not running on<br>> this host", "health": "bad",=
"vm": "down", "detail": "unknown"}<br>> Score &nbs=
p; &n=
bsp;: 0<br>> Local maintenance =
: True<br>> Host timestamp &nb=
sp; : 681827<br>> Extra metada=
ta (valid at timestamp):<br>> metadata_parse_version=3D1<br>> metadat=
a_feature_version=3D1<br>> timestamp=3D681827 (Sat Dec 27 08:42:40 2014)=
<br>> host-id=3D3<br>> score=3D0<br>> maintenance=3DTrue<br>> s=
tate=3DLocalMaintenance<br>> [root@compute2-1 vdsm]# tail -f /var/log/ov=
irt-hosted-engine-ha/agent.log<br>> MainThread::INFO::2014-12-27<br>>=
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engi=
ne.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id:=
1, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:42:51,198:=
:state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEn=
gine::(check)<br>> Local maintenance detected<br>> MainThread::INFO::=
2014-12-27<br>> 08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha=
.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state=
LocalMaintenance (score: 0)<br>> MainThread::INFO::2014-12-27<br>> 0=
8:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine=
.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1=
, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:43:01,507::s=
tate_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngi=
ne::(check)<br>> Local maintenance detected<br>> MainThread::INFO::20=
14-12-27<br>> 08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state L=
ocalMaintenance (score: 0)<br>> MainThread::INFO::2014-12-27<br>> 08:=
43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.H=
ostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1, =
score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:43:11,859::sta=
te_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine=
::(check)<br>> Local maintenance detected<br>> MainThread::INFO::2014=
-12-27<br>> 08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.age=
nt.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state Loc=
alMaintenance (score: 0)<br>> MainThread::INFO::2014-12-27<br>> 08:43=
:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hos=
tedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1, sc=
ore: 2400)<br>><br>><br>><br>> [root@compute2-3 ~]# tail -f /va=
r/log/ovirt-hosted-engine-ha/agent.log<br>> MainThread::INFO::2014-12-27=
<br>> 11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hos=
ted_engine.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0=
.93 (id: 2, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 11:36=
:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.Hos=
tedEngine::(start_monitoring)<br>> Current state EngineUp (score: 2400)<=
br>> MainThread::INFO::2014-12-27<br>> 11:36:39,130::hosted_engine::3=
32::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitor=
ing)<br>> Best remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainTh=
read::INFO::2014-12-27<br>> 11:36:49,449::hosted_engine::327::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> C=
urrent state EngineUp (score: 2400)<br>> MainThread::INFO::2014-12-27<br=
>> 11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted=
_engine.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.93=
(id: 2, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 11:36:59=
,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted=
Engine::(start_monitoring)<br>> Current state EngineUp (score: 2400)<br>=
> MainThread::INFO::2014-12-27<br>> 11:36:59,739::hosted_engine::332:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>> Best remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThrea=
d::INFO::2014-12-27<br>> 11:37:09,779::states::394::ovirt_hosted_engine_=
ha.agent.hosted_engine.HostedEngine::(consume)<br>> Engine vm running on=
localhost<br>> MainThread::INFO::2014-12-27<br>> 11:37:10,026::hoste=
d_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(st=
art_monitoring)<br>> Current state EngineUp (score: 2400)<br>> MainTh=
read::INFO::2014-12-27<br>> 11:37:10,026::hosted_engine::332::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> B=
est remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::20=
14-12-27<br>> 11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state E=
ngineUp (score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 11:37:20=
,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted=
Engine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, score=
: 2400)<br>><br>><br>> [root@compute2-2 ~]# tail -f /var/log/ovirt=
-hosted-engine-ha/agent.log<br>> MainThread::INFO::2014-12-27<br>> 08=
:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.=
HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1,=
score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:36:22,797::ho=
sted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::=
(start_monitoring)<br>> Current state EngineDown (score: 2400)<br>> M=
ainThread::INFO::2014-12-27<br>> 08:36:22,798::hosted_engine::332::ovirt=
_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>&=
gt; Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThread::INF=
O::2014-12-27<br>> 08:36:32,876::states::437::ovirt_hosted_engine_ha.age=
nt.hosted_engine.HostedEngine::(consume)<br>> Engine vm is running on ho=
st 10.0.0.94 (id 1)<br>> MainThread::INFO::2014-12-27<br>> 08:36:33,1=
69::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEn=
gine::(start_monitoring)<br>> Current state EngineDown (score: 2400)<br>=
> MainThread::INFO::2014-12-27<br>> 08:36:33,169::hosted_engine::332:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>> Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThrea=
d::INFO::2014-12-27<br>> 08:36:43,567::hosted_engine::327::ovirt_hosted_=
engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Curr=
ent state EngineDown (score: 2400)<br>> MainThread::INFO::2014-12-27<br>=
> 08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_=
engine.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 =
(id: 1, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:36:53,=
858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE=
ngine::(start_monitoring)<br>> Current state EngineDown (score: 2400)<br=
>> MainThread::INFO::2014-12-27<br>> 08:36:53,858::hosted_engine::332=
::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitorin=
g)<br>> Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThre=
ad::INFO::2014-12-27<br>> 08:37:04,028::state_machine::160::ovirt_hosted=
_engine_ha.agent.hosted_engine.HostedEngine::(refresh)<br>> Global metad=
ata: {'maintenance': False}<br>> MainThread::INFO::2014-12-27<br>> 08=
:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.=
HostedEngine::(refresh)<br>> Host 10.0.0.94 (id 1): {'extra':<br>> 'm=
etadata_parse_version=3D1\nmetadata_feature_version=3D1\ntimestamp=3D835987=
<br>> (Sat Dec 27 11:37:30<br>> 2014)\nhost-id=3D1\nscore=3D2400\nmai=
ntenance=3DFalse\nstate=3DEngineUp\n',<br>> 'hostname': '10.0.0.94', 'al=
ive': True, 'host-id': 1, 'engine-status':<br>> {'health': 'good', 'vm':=
'up', 'detail': 'up'}, 'score': 2400,<br>> 'maintenance': False, 'host-=
ts': 835987}<br>> MainThread::INFO::2014-12-27<br>> 08:37:04,028::sta=
te_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(=
refresh)<br>> Host 10.0.0.92 (id 3): {'extra':<br>> 'metadata_parse_v=
ersion=3D1\nmetadata_feature_version=3D1\ntimestamp=3D681528<br>> (Sat D=
ec 27 08:37:41<br>> 2014)\nhost-id=3D3\nscore=3D0\nmaintenance=3DTrue\ns=
tate=3DLocalMaintenance\n',<br>> 'hostname': '10.0.0.92', 'alive': True,=
'host-id': 3, 'engine-status':<br>> {'reason': 'vm not running on this =
host', 'health': 'bad', 'vm':<br>> 'down', 'detail': 'unknown'}, 'score'=
: 0, 'maintenance': True,<br>> 'host-ts': 681528}<br>> MainThread::IN=
FO::2014-12-27<br>> 08:37:04,028::state_machine::168::ovirt_hosted_engin=
e_ha.agent.hosted_engine.HostedEngine::(refresh)<br>> Local (id 2): {'en=
gine-health': {'reason': 'vm not running on this<br>> host', 'health': '=
bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':<br>> True, 'mem-free=
': 15300.0, 'maintenance': False, 'cpu-load': 0.0215,<br>> 'gateway': Tr=
ue}<br>> MainThread::INFO::2014-12-27<br>> 08:37:04,265::hosted_engin=
e::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_mon=
itoring)<br>> Current state EngineDown (score: 2400)<br>> MainThread:=
:INFO::2014-12-27<br>> 08:37:04,265::hosted_engine::332::ovirt_hosted_en=
gine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Best r=
emote host 10.0.0.94 (id: 1, score: 2400)<br>><br>> Thanks,<br>> C=
ong<br>><br>> On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabo=
s@redhat.com<mailto:stirabos@redhat.com><mailto:stirabos@redhat.co=
m>> wrote:<br>><br>><br>><br>> ----- Original Message ---=
--<br>> From: "Cong Yue" <Cong_Yue@alliedtelesis.com<mailto:Cong_Y=
ue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>><br>&g=
t; To: "Simone Tiraboschi" <stirabos@redhat.com<mailto:stirabos@redha=
t.com><mailto:stirabos@redhat.com>><br>> Cc: users(a)ovirt.org=
<mailto:users@ovirt.org><mailto:users@ovirt.org><br>> Sent: =
Friday, December 19, 2014 7:22:10 PM<br>> Subject: RE: [ovirt-users] VM =
failover with ovirt3.5<br>><br>> Thanks for the information. This is =
the log for my three ovirt nodes.<br>> From the output of hosted-engine =
--vm-status, it shows the engine state for<br>> my 2nd and 3rd ovirt nod=
e is DOWN.<br>> Is this the reason why VM failover not work in my enviro=
nment?<br>><br>> No, they looks ok: you can run the engine VM on sing=
le host at a time.<br>><br>> How can I make<br>> also engine works=
for my 2nd and 3rd ovit nodes?<br>><br>> If you put the host 1 in lo=
cal maintenance mode ( hosted-engine --set-maintenance --mode=3Dlocal ) the=
VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --se=
t-maintenance --mode=3Dnone ) and put host 2 in local maintenance mode the =
VM should migrate again.<br>><br>> Can you please try that and post t=
he logs if something is going bad?<br>><br>><br>> --<br>> --=3D=
=3D Host 1 status =3D=3D--<br>><br>> Status up-to-date =
: True<br>> Hostname &nb=
sp; &=
nbsp; : 10.0.0.94<br>> Host ID =
: 1<br>> =
Engine status  =
; : {"health": "good", "vm": "up",<br>> "detail": "up"}<br>=
> Score &=
nbsp; : 2400<br>> Local maintenance &n=
bsp; : False<br>>=
Host timestamp &nb=
sp; : 150475<br>> Extra metadata (valid at timestamp):<br>> me=
tadata_parse_version=3D1<br>> metadata_feature_version=3D1<br>> times=
tamp=3D150475 (Fri Dec 19 13:12:18 2014)<br>> host-id=3D1<br>> score=
=3D2400<br>> maintenance=3DFalse<br>> state=3DEngineUp<br>><br>>=
;<br>> --=3D=3D Host 2 status =3D=3D--<br>><br>> Status up-to-date=
: True<br>&g=
t; Hostname =
: 10.0.0.93<br>> Host ID  =
; &nb=
sp;: 2<br>> Engine status &nbs=
p; : {"reason": "vm not running on<br>> this =
host", "health": "bad", "vm": "down", "detail": "unknown"}<br>> Score &n=
bsp; =
: 2400<br>> Local maintenance &=
nbsp; : False<br>> Host timesta=
mp : =
1572<br>> Extra metadata (valid at timestamp):<br>> metadata_parse_ve=
rsion=3D1<br>> metadata_feature_version=3D1<br>> timestamp=3D1572 (Fr=
i Dec 19 10:12:18 2014)<br>> host-id=3D2<br>> score=3D2400<br>> ma=
intenance=3DFalse<br>> state=3DEngineDown<br>><br>><br>> --=3D=
=3D Host 3 status =3D=3D--<br>><br>> Status up-to-date =
: False<br>> Hostname &n=
bsp; =
: 10.0.0.92<br>> Host ID  =
; : 3<br>>=
Engine status &nbs=
p; : unknown stale-data<br>> Score &nb=
sp; &=
nbsp;: 2400<br>> Local maintenance &n=
bsp; : False<br>> Host timestamp  =
; : 987<br>> Extra meta=
data (valid at timestamp):<br>> metadata_parse_version=3D1<br>> metad=
ata_feature_version=3D1<br>> timestamp=3D987 (Fri Dec 19 10:09:58 2014)<=
br>> host-id=3D3<br>> score=3D2400<br>> maintenance=3DFalse<br>>=
; state=3DEngineDown<br>><br>> --<br>> And the /var/log/ovirt-host=
ed-engine-ha/agent.log for three ovirt nodes are<br>> as follows:<br>>=
; --<br>> 10.0.0.94(hosted-engine-1)<br>> ---<br>> MainThread::INF=
O::2014-12-19<br>> 13:09:33,716::hosted_engine::327::ovirt_hosted_engine=
_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current st=
ate EngineUp (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:=
09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.H=
ostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, =
score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:09:44,017::hos=
ted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(=
start_monitoring)<br>> Current state EngineUp (score: 2400)<br>> Main=
Thread::INFO::2014-12-19<br>> 13:09:44,017::hosted_engine::332::ovirt_ho=
sted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>>=
Best remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::=
2014-12-19<br>> 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha=
.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state=
EngineUp (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:09:=
54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Host=
edEngine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, sco=
re: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:04,342::states=
::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)<b=
r>> Engine vm running on localhost<br>> MainThread::INFO::2014-12-19<=
br>> 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.host=
ed_engine.HostedEngine::(start_monitoring)<br>> Current state EngineUp (=
score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:04,617::hos=
ted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(=
start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, score: 2400)<b=
r>> MainThread::INFO::2014-12-19<br>> 13:10:14,657::state_machine::16=
0::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)<br>&g=
t; Global metadata: {'maintenance': False}<br>> MainThread::INFO::2014-1=
2-19<br>> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent=
.hosted_engine.HostedEngine::(refresh)<br>> Host 10.0.0.93 (id 2): {'ext=
ra':<br>> 'metadata_parse_version=3D1\nmetadata_feature_version=3D1\ntim=
estamp=3D1448<br>> (Fri Dec 19 10:10:14<br>> 2014)\nhost-id=3D2\nscor=
e=3D2400\nmaintenance=3DFalse\nstate=3DEngineDown\n',<br>> 'hostname': '=
10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':<br>> {'reason'=
: 'vm not running on this host', 'health': 'bad', 'vm':<br>> 'down', 'de=
tail': 'unknown'}, 'score': 2400, 'maintenance': False,<br>> 'host-ts': =
1448}<br>> MainThread::INFO::2014-12-19<br>> 13:10:14,657::state_mach=
ine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh=
)<br>> Host 10.0.0.92 (id 3): {'extra':<br>> 'metadata_parse_version=
=3D1\nmetadata_feature_version=3D1\ntimestamp=3D987<br>> (Fri Dec 19 10:=
09:58<br>> 2014)\nhost-id=3D3\nscore=3D2400\nmaintenance=3DFalse\nstate=
=3DEngineDown\n',<br>> 'hostname': '10.0.0.92', 'alive': True, 'host-id'=
: 3, 'engine-status':<br>> {'reason': 'vm not running on this host', 'he=
alth': 'bad', 'vm':<br>> 'down', 'detail': 'unknown'}, 'score': 2400, 'm=
aintenance': False,<br>> 'host-ts': 987}<br>> MainThread::INFO::2014-=
12-19<br>> 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agen=
t.hosted_engine.HostedEngine::(refresh)<br>> Local (id 1): {'engine-heal=
th': {'health': 'good', 'vm': 'up',<br>> 'detail': 'up'}, 'bridge': True=
, 'mem-free': 1079.0, 'maintenance':<br>> False, 'cpu-load': 0.0269, 'ga=
teway': True}<br>> MainThread::INFO::2014-12-19<br>> 13:10:14,904::ho=
sted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::=
(start_monitoring)<br>> Current state EngineUp (score: 2400)<br>> Mai=
nThread::INFO::2014-12-19<br>> 13:10:14,904::hosted_engine::332::ovirt_h=
osted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>>=
; Best remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO:=
:2014-12-19<br>> 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_h=
a.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current stat=
e EngineUp (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10=
:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hos=
tedEngine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, sc=
ore: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:35,499::hoste=
d_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(st=
art_monitoring)<br>> Current state EngineUp (score: 2400)<br>> MainTh=
read::INFO::2014-12-19<br>> 13:10:35,499::hosted_engine::332::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> B=
est remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::20=
14-12-19<br>> 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state E=
ngineUp (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:45=
,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted=
Engine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, score=
: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:56,070::hosted_e=
ngine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start=
_monitoring)<br>> Current state EngineUp (score: 2400)<br>> MainThrea=
d::INFO::2014-12-19<br>> 13:10:56,070::hosted_engine::332::ovirt_hosted_=
engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Best=
remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::2014-=
12-19<br>> 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hoste=
d_engine.HostedEngine::(consume)<br>> Engine vm running on localhost<br>=
> MainThread::INFO::2014-12-19<br>> 13:11:06,359::hosted_engine::327:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>> Current state EngineUp (score: 2400)<br>> MainThread::INFO::20=
14-12-19<br>> 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>> Best remote hos=
t 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::2014-12-19<br>&g=
t; 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_en=
gine.HostedEngine::(start_monitoring)<br>> Current state EngineUp (score=
: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:11:16,658::hosted_e=
ngine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start=
_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> ----
>
> 10.0.0.93 (hosted-engine-2)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
>
> 10.0.0.92 (hosted-engine-3)
> same as 10.0.0.93
> --
>
> -----Original Message-----
> From: Simone Tiraboschi [mailto:stirabos@redhat.com]
> Sent: Friday, December 19, 2014 12:28 AM
> To: Yue, Cong
> Cc: users@ovirt.org
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
>
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
> To: users@ovirt.org
> Sent: Friday, December 19, 2014 2:14:33 AM
> Subject: [ovirt-users] VM failover with ovirt3.5
>
>
>
> Hi
>
>
>
> In my environment, I have 3 ovirt nodes as one cluster. And on top of
> host-1, there is one vm to host ovirt engine.
>
> Also I have one external storage for the cluster to use as data domain
> of engine and data.
>
> I confirmed live migration works well in my environment.
>
> But it seems very buggy for VM failover if I try to force to shut down
> one ovirt node. Sometimes the VM in the node which is shutdown can
> migrate to other host, but it take more than several minutes.
>
> Sometimes, it can not migrate at all. Sometimes, only when the host is
> back, the VM is beginning to move.
>
> Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/
> ?
>
> Is there some documentation to explain how VM failover is working? And
> is there some bugs reported related with this?
>
> http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
>
> Thanks in advance,
>
> Cong
>
>
> This e-mail message is for the sole use of the intended recipient(s)
> and may contain confidential and privileged information. Any
> unauthorized review, use, disclosure or distribution is prohibited. If
> you are not the intended recipient, please contact the sender by reply
> e-mail and destroy all copies of the original message. If you are the
> intended recipient, please be advised that the content of this message
> is subject to access, review and disclosure by the sender's e-mail System
> Administrator.
>
> _______________________________________________
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

------------------------------

End of Users Digest, Vol 39, Issue 169
**************************************
Re: [ovirt-users] ??: bond mode balance-alb
by Nikolai Sednev
------=_Part_1871238_1615445632.1419874799888
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
I'd like to add that the floating MAC used by mode 5 ("balance-tlb") and the ARP negotiation used by mode 6 ("balance-alb") will hurt latency and performance, so these modes should be avoided.
Mode zero ("balance-rr") should also be avoided: it is the only mode that allows a single TCP/IP stream to utilize more than one interface, and hence introduces additional jitter, latency and performance impact, because frames/packets of the same flow are sent and arrive on different interfaces, whereas the preferred approach is to balance per flow. Unless your data center runs L2-only traffic, I really don't see any use case for mode zero.
Cisco routers have a feature called IP CEF, turned on by default, that balances traffic per TCP/IP flow instead of per packet; it is used to make better per-flow load-balancing routing decisions. If it is turned off, per-packet load balancing is enforced, causing a heavy impact on the router's CPU and memory resources, since a decision has to be made for every packet. The higher the bit rate, the harder the impact on the router's resources, especially with small packets.
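Since the modes above are best avoided, the usual alternative is mode 4 (802.3ad/LACP). A minimal initscripts-style bond configuration for it could look like the sketch below; the device names and option values are illustrative assumptions, not taken from any setup in this thread:

```shell
# Hypothetical /etc/sysconfig/network-scripts/ifcfg-bond0 for mode 4 (LACP).
# miimon, lacp_rate and xmit_hash_policy values are illustrative.
DEVICE=bond0
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer2+3"
ONBOOT=yes
BOOTPROTO=none
NM_CONTROLLED=no

# Each slave NIC (e.g. ifcfg-em1) then carries:
# DEVICE=em1
# MASTER=bond0
# SLAVE=yes
# ONBOOT=yes
```

Note that the switch ports also have to be configured as an LACP LAG for mode 4 to come up.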
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 29, 2014 6:53:59 AM
Subject: Users Digest, Vol 39, Issue 163
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: Problem after update ovirt to 3.5 (Juan Jose)
2. Re: ??: bond mode balance-alb (Dan Kenigsberg)
3. Re: VM failover with ovirt3.5 (Yue, Cong)
----------------------------------------------------------------------
Message: 1
Date: Sun, 28 Dec 2014 20:08:37 +0100
From: Juan Jose <jj197005(a)gmail.com>
To: Simone Tiraboschi <stirabos(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] Problem after update ovirt to 3.5
Message-ID:
<CADrE9wYtNdMPNsyjjZxA3zbyKZhYB5DA03wQ17dTLfuBBtA-Bg(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Many thanks Simone,
Juanjo.
On Tue, Dec 16, 2014 at 1:48 PM, Simone Tiraboschi <stirabos(a)redhat.com>
wrote:
>
>
> ----- Original Message -----
> > From: "Juan Jose" <jj197005(a)gmail.com>
> > To: "Yedidyah Bar David" <didi(a)redhat.com>, sbonazzo(a)redhat.com
> > Cc: users(a)ovirt.org
> > Sent: Tuesday, December 16, 2014 1:03:17 PM
> > Subject: Re: [ovirt-users] Problem after update ovirt to 3.5
> >
> > Hello everybody,
> >
> > It was the firewall: after upgrading my engine, the NFS configuration had
> > disappeared. I configured it again as Red Hat says and now it works
> > properly again.
> >
> > Many thanks again for the indications.
>
> We already had a patch for it [1];
> it will be released next month with oVirt 3.5.1
>
> [1] http://gerrit.ovirt.org/#/c/32874/
>
> > Juanjo.
> >
> > On Mon, Dec 15, 2014 at 2:32 PM, Yedidyah Bar David < didi(a)redhat.com >
> > wrote:
> >
> >
> > ----- Original Message -----
> > > From: "Juan Jose" < jj197005(a)gmail.com >
> > > To: users(a)ovirt.org
> > > Sent: Monday, December 15, 2014 3:17:15 PM
> > > Subject: [ovirt-users] Problem after update ovirt to 3.5
> > >
> > > Hello everybody,
> > >
> > > After upgrade my engine to oVirt 3.5, I have also upgraded one of my
> hosts
> > > to
> > > oVirt 3.5. After that it seems that all have gone good aparently.
> > >
> > > But in some seconds my ISO domain is desconnected and it is impossible
> to
> > > Activate. I'm attaching my engine.log. The below error is showed each
> time
> > > I
> > > try to Activate the ISO domain. Before the upgrade it was working
> without
> > > problems:
> > >
> > > 2014-12-15 13:25:07,607 ERROR
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null,
> Call
> > > Stack: null, Custom Event ID: -1, Message: Failed to connect Host
> host1 to
> > > the Storage Domains ISO_DOMAIN.
> > > 2014-12-15 13:25:07,608 INFO
> > >
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] FINISH,
> > > ConnectStorageServerVDSCommand, return:
> > > {81c0a853-715c-4478-a812-6a74808fc482=477}, log id: 3590969e
> > > 2014-12-15 13:25:07,615 ERROR
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null,
> Call
> > > Stack: null, Custom Event ID: -1, Message: The error message for
> connection
> > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 returned by
> > > VDSM
> > > was: Problem while trying to mount target
> > > 2014-12-15 13:25:07,616 ERROR
> > > [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with
> details
> > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 failed
> because
> > > of error code 477 and error message is: problem while trying to mount
> > > target
> > >
> > > If any other information is required, please tell me.
> >
> > Is the ISO domain on the engine host?
> >
> > Please check there iptables and /etc/exports, /etc/exports.d.
> >
> > Please post the setup (upgrade) log, check /var/log/ovirt-engine/setup.
> >
> > Thanks,
> > --
> > Didi
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
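As a concrete illustration of the iptables/exports check suggested above, the engine-side NFS configuration for the ISO domain could look roughly like this sketch; the export path appears in the logs quoted in this thread, but the client subnet and the exact option set are assumptions:

```shell
# Hypothetical /etc/exports.d/ovirt-iso.exports -- export the ISO domain
# to the hosts (the 10.0.0.0/24 subnet and option list are illustrative;
# uid/gid 36 is the vdsm user oVirt storage is usually owned by)
/var/lib/exports/iso-20140303082312  10.0.0.0/24(rw,sync,no_subtree_check,anonuid=36,anongid=36)

# iptables rules the engine host needs open so hosts can mount the export
# (NFSv3 additionally needs the mountd/statd ports pinned and opened)
-A INPUT -p tcp --dport 111  -j ACCEPT
-A INPUT -p udp --dport 111  -j ACCEPT
-A INPUT -p tcp --dport 2049 -j ACCEPT
```

After editing, `exportfs -ra` reloads the export table, and `showmount -e <engine-host>` from a node verifies the export is visible.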
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.ovirt.org/pipermail/users/attachments/20141228/bab30c2a/atta...>
------------------------------
Message: 2
Date: Sun, 28 Dec 2014 23:56:58 +0000
From: Dan Kenigsberg <danken(a)redhat.com>
To: Blaster <Blaster(a)556nato.com>
Cc: "Users(a)ovirt.org List" <users(a)ovirt.org>
Subject: Re: [ovirt-users] ??: bond mode balance-alb
Message-ID: <20141228235658.GE21690(a)redhat.com>
Content-Type: text/plain; charset=us-ascii
On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
> On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
> >Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
> >https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
>
> Dan,
>
> What is bad about these modes that oVirt can't use them?
I can only quote jpirko's words from the link above:
Do not use tlb or alb in bridge, never! It does not work, that's it. The reason
is it mangles source macs in xmit frames and arps. When it is possible, just
use mode 4 (lacp). That should be always possible because all enterprise
switches support that. Generally, for 99% of use cases, you *should* use mode
4. There is no reason to use other modes.
>
> I just tested mode 4, and the LACP with Fedora 20 appears to not be
> compatible with the LAG mode on my Dell 2824.
>
> Would there be any issues with bringing two NICS into the VM and doing
> balance-alb at the guest level?
>
>
>
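To see which mode a bond actually negotiated (e.g. whether LACP really came up against the switch), the kernel reports it in /proc/net/bonding/<bond>. A small sketch (the helper name is mine) that extracts the mode line:

```python
import re

def bond_mode(bonding_text):
    """Return the 'Bonding Mode' reported in /proc/net/bonding/<bond> content."""
    m = re.search(r"Bonding Mode:\s*(.+)", bonding_text)
    return m.group(1).strip() if m else None

# In practice: bond_mode(open("/proc/net/bonding/bond0").read())
sample = (
    "Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)\n"
    "Bonding Mode: IEEE 802.3ad Dynamic link aggregation\n"
    "MII Status: up\n"
)
print(bond_mode(sample))  # IEEE 802.3ad Dynamic link aggregation
```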
------------------------------
Message: 3
Date: Sun, 28 Dec 2014 20:53:44 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Artyom Lukianov <alukiano(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Message-ID: <B7E7D6D4-B85D-471C-87A7-EA9AD32BF279(a)alliedtelesis.com>
Content-Type: text/plain; charset="utf-8"
I checked it again and confirmed there is one guest VM running on top of this host. The log is as follows:
[root@compute2-1 vdsm]# ps -ef | grep qemu
qemu 2983 846 0 Dec19 ? 00:00:00 [supervdsmServer] <defunct>
root 5489 3053 0 20:49 pts/0 00:00:00 grep --color=auto qemu
qemu 26128 1 0 Dec19 ? 01:09:19 /usr/libexec/qemu-kvm
-name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
-uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:18:01,driftfix=slew -no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
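The repeating agent.log lines above are easy to summarize mechanically; a small sketch (the helper name is mine, not part of ovirt-hosted-engine-ha) that pulls out the most recent state and score:

```python
import re

STATE_RE = re.compile(r"Current state (\w+) \(score: (\d+)\)")

def latest_state(log_lines):
    """Return (state, score) from the last 'Current state ...' line seen."""
    last = None
    for line in log_lines:
        m = STATE_RE.search(line)
        if m:
            last = (m.group(1), int(m.group(2)))
    return last

sample = [
    "Current state EngineUp (score: 2400)",
    "Best remote host 10.0.0.94 (id: 1, score: 2400)",
    "Current state LocalMaintenance (score: 0)",
]
print(latest_state(sample))  # ('LocalMaintenance', 0)
```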
Thanks,
Cong
On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano(a)redhat.com<mailto:alukiano@redhat.com>> wrote:
I see that you set local maintenance on host3, which does not have the engine vm on it, so there is nothing to migrate from this host.
If you set local maintenance on host1, the vm must migrate to another host with a positive score.
Thanks
----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>>
To: "Simone Tiraboschi" <stirabos(a)redhat.com<mailto:stirabos@redhat.com>>
Cc: users(a)ovirt.org<mailto:users@ovirt.org>
Sent: Saturday, December 27, 2014 6:58:32 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Hi
I had a try with "hosted-engine --set-maintenance --mode=local" on
compute2-1, which is host 3 in my cluster. From the log, it shows
maintenance mode is detected, but migration does not happen.
The logs are as follows. Is there any other config I need to check?
[root@compute2-1 vdsm]# hosted-engine --vm-status
--== Host 1 status ==-
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 836296
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=836296 (Sat Dec 27 11:42:39 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 687358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=687358 (Sat Dec 27 08:42:04 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : True
Hostname : 10.0.0.92
Host ID : 3
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 0
Local maintenance : True
Host timestamp : 681827
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=681827 (Sat Dec 27 08:42:40 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-27
11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-27
08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.94 (id 1): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987
(Sat Dec 27 11:37:30
2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status':
{'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400,
'maintenance': False, 'host-ts': 835987}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.92 (id 3): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528
(Sat Dec 27 08:37:41
2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n',
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True,
'host-ts': 681528}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 2): {'engine-health': {'reason': 'vm not running on this
host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':
True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215,
'gateway': True}
MainThread::INFO::2014-12-27
08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
Thanks,
Cong
On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos(a)redhat.com<mailto:stirabos@redhat.com>> wrote:
----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>>
To: "Simone Tiraboschi" <stirabos(a)redhat.com<mailto:stirabos@redhat.com>>
Cc: users(a)ovirt.org<mailto:users@ovirt.org>
Sent: Friday, December 19, 2014 7:22:10 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5
Thanks for the information. This is the log for my three ovirt nodes.
From the output of hosted-engine --vm-status, it shows the engine state for
my 2nd and 3rd ovirt node is DOWN.
Is this the reason why VM failover not work in my environment?
No, they look ok: you can run the engine VM on a single host at a time.
How can I make
the engine also work for my 2nd and 3rd ovirt nodes?
If you put the host 1 in local maintenance mode ( hosted-engine --set-maintenance --mode=local ) the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put host 2 in local maintenance mode the VM should migrate again.
Can you please try that and post the logs if something is going bad?
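For checking the result of such a test, the plain-text `hosted-engine --vm-status` output quoted in this thread can be parsed mechanically. A rough sketch (the function is mine, and the field layout is assumed from the output format shown in this thread) that finds which host currently runs the engine VM:

```python
import json
import re

def engine_host(vm_status_text):
    """Return the hostname whose engine status reports vm == 'up', or None."""
    for block in re.split(r"--== Host \d+ status ==-+", vm_status_text):
        host = re.search(r"Hostname\s*:\s*(\S+)", block)
        status = re.search(r"Engine status\s*:\s*(\{.*?\})", block, re.S)
        if not (host and status):
            continue
        try:
            info = json.loads(status.group(1))
        except ValueError:  # e.g. "unknown stale-data" on a stale host
            continue
        if info.get("vm") == "up":
            return host.group(1)
    return None

sample = """\
--== Host 1 status ==--
Hostname                           : 10.0.0.94
Engine status                      : {"health": "good", "vm": "up",
"detail": "up"}
--== Host 2 status ==--
Hostname                           : 10.0.0.93
Engine status                      : {"reason": "vm not running on this host",
"health": "bad", "vm": "down", "detail": "unknown"}
"""
print(engine_host(sample))  # 10.0.0.94
```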
--
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 150475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=150475 (Fri Dec 19 13:12:18 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 1572
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1572 (Fri Dec 19 10:12:18 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : False
Hostname : 10.0.0.92
Host ID : 3
Engine status : unknown stale-data
Score : 2400
Local maintenance : False
Host timestamp : 987
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=987 (Fri Dec 19 10:09:58 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown
--
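The `hosted-engine --vm-status` report above is plain text, so stale data (like Host 3's "Status up-to-date : False") can be spotted programmatically. A minimal sketch, assuming only the textual layout shown above; `parse_vm_status` is a hypothetical helper of mine, not part of oVirt:

```python
import re

def parse_vm_status(text):
    """Parse `hosted-engine --vm-status` output into {host_id: {field: value}}."""
    hosts, current = {}, None
    for line in text.splitlines():
        m = re.match(r"--== Host (\d+) status ==--", line)
        if m:
            current = hosts.setdefault(int(m.group(1)), {})
        elif current is not None and " : " in line:
            # Fields look like "Status up-to-date   : True"
            key, _, value = line.partition(" : ")
            current[key.strip()] = value.strip()
    return hosts

sample = """\
--== Host 1 status ==--
Status up-to-date                  : True
Score                              : 2400
--== Host 3 status ==--
Status up-to-date                  : False
Score                              : 2400
"""
parsed = parse_vm_status(sample)
print(parsed[3]["Status up-to-date"])  # -> False
```

A script like this could, for example, alert when any host reports stale data or a score below 2400.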
And the /var/log/ovirt-hosted-engine-ha/agent.log for three ovirt nodes are
as follows:
--
10.0.0.94 (hosted-engine-1)
---
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.93 (id 2): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448
(Fri Dec 19 10:10:14
2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
'host-ts': 1448}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.92 (id 3): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987
(Fri Dec 19 10:09:58
2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
'host-ts': 987}
MainThread::INFO::2014-12-19
13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up',
'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance':
False, 'cpu-load': 0.0269, 'gateway': True}
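The 'extra' field in the refresh log above is just newline-separated key=value pairs, so it decodes trivially. A small sketch (the helper name is mine, not from the HA agent), using the exact string logged for host 2:

```python
extra = ("metadata_parse_version=1\nmetadata_feature_version=1\n"
         "timestamp=1448 (Fri Dec 19 10:10:14 2014)\nhost-id=2\n"
         "score=2400\nmaintenance=False\nstate=EngineDown\n")

def parse_extra(raw):
    """Decode the HA agent's 'extra' metadata blob into a dict."""
    pairs = (line.split("=", 1) for line in raw.strip().splitlines())
    return {k: v for k, v in pairs}

meta = parse_extra(extra)
print(meta["state"], meta["score"])  # -> EngineDown 2400
```

Note that all values stay strings; the timestamp line keeps its human-readable suffix after the first '='.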
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
----
10.0.0.93 (hosted-engine-2)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
10.0.0.92 (hosted-engine-3)
same as 10.0.0.93
--
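The agent.log excerpts above repeat the same "Current state X (score: N)" line every ~10 seconds, so a state timeline can be extracted with a single regex. This parser is my own illustration, not an oVirt tool; the embedded log line is copied from the excerpt above:

```python
import re

LOG = """\
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
"""

# Matches e.g. "Current state EngineUp (score: 2400)"
STATE_RE = re.compile(r"Current state (\w+) \(score: (\d+)\)")

states = [(m.group(1), int(m.group(2))) for m in STATE_RE.finditer(LOG)]
print(states)  # -> [('EngineUp', 2400)]
```

Collapsing consecutive duplicates in such a timeline makes it easy to see exactly when a host flipped between EngineUp, EngineDown, or LocalMaintenance.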
-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:28 AM
To: Yue, Cong
Cc: users(a)ovirt.org<mailto:users@ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>>
To: users(a)ovirt.org<mailto:users@ovirt.org>
Sent: Friday, December 19, 2014 2:14:33 AM
Subject: [ovirt-users] VM failover with ovirt3.5
Hi
In my environment, I have 3 oVirt nodes in one cluster, and on top of
host-1 there is one VM hosting the oVirt engine.
I also have one external storage server that the cluster uses as the data
domain for the engine and for data.
I confirmed live migration works well in my environment.
But VM failover seems very buggy if I force one oVirt node to shut down.
Sometimes the VM on the node that was shut down can migrate to another
host, but it takes more than several minutes.
Sometimes it cannot migrate at all. Sometimes the VM only begins to move
once the host is back.
Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/
?
Is there some documentation explaining how VM failover works? And are
there any reported bugs related to this?
http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
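As a loose illustration of what that diagram describes, each host's HA agent moves between named states in response to local events. The transition table below is a simplified sketch of mine covering only a few of the states seen in this thread (EngineUp, EngineDown, LocalMaintenance, EngineStart); the linked diagram is the authoritative state machine:

```python
# Simplified, illustrative subset of the HA agent state machine;
# see the linked Agent State Diagram for the real transitions.
TRANSITIONS = {
    ("EngineUp", "local_maintenance"): "LocalMaintenance",
    ("EngineDown", "local_maintenance"): "LocalMaintenance",
    ("LocalMaintenance", "maintenance_off"): "EngineDown",
    ("EngineDown", "best_score_here"): "EngineStart",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

s = "EngineUp"
s = step(s, "local_maintenance")
print(s)  # -> LocalMaintenance
```

The key point for this thread: a host in LocalMaintenance advertises score 0, so the engine VM will only migrate away from the host it is actually running on.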
Thanks in advance,
Cong
This e-mail message is for the sole use of the intended recipient(s)
and may contain confidential and privileged information. Any
unauthorized review, use, disclosure or distribution is prohibited. If
you are not the intended recipient, please contact the sender by reply
e-mail and destroy all copies of the original message. If you are the
intended recipient, please be advised that the content of this message
is subject to access, review and disclosure by the sender's e-mail System
Administrator.
_______________________________________________
Users mailing list
Users(a)ovirt.org<mailto:Users@ovirt.org>
http://lists.ovirt.org/mailman/listinfo/users
------------------------------
End of Users Digest, Vol 39, Issue 163
**************************************
I'd like to add that using the floating MAC of "balance-tlb" (mode 5) or the
ARP negotiation of "balance-alb" (mode 6) for load balancing will influence
latency and performance, so such modes should be avoided. Mode zero
("balance-rr") should also be avoided, as it is the only mode that allows a
single TCP/IP stream to utilize more than one interface, and hence creates
additional jitter, latency, and performance impacts: frames/packets are sent
and arrive on different interfaces, while balancing per flow is preferred.
Unless your data center uses only L2-based traffic, I really don't see any
use for mode zero.

In Cisco routers there is a functionality called IP CEF, which is turned on
by default and balances traffic per TCP/IP flow instead of per packet; it is
used for better per-flow load-balancing decisions. If it is turned off,
per-packet load balancing is enforced, causing a high performance impact on
the router's CPU and memory, as decisions have to be made at the per-packet
level. The higher the bit rate, the harder the impact on the router's
resources, especially for small packets.

Thanks in advance.

Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501

Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev

From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 29, 2014 6:53:59 AM
Subject: Users Digest, Vol 39, Issue 163

Today's Topics:

   1. Re: Problem after update ovirt to 3.5 (Juan Jose)
   2. Re: bond mode balance-alb (Dan Kenigsberg)
   3. Re: VM failover with ovirt3.5 (Yue, Cong)

------------------------------

Message: 1
Date: Sun, 28 Dec 2014 20:08:37 +0100
From: Juan Jose <jj197005(a)gmail.com>
To: Simone Tiraboschi <stirabos(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] Problem after update ovirt to 3.5

Many thanks Simone,

Juanjo.

On Tue, Dec 16, 2014 at 1:48 PM, Simone Tiraboschi <stirabos(a)redhat.com>
wrote:

> ----- Original Message -----
> > From: "Juan Jose" <jj197005(a)gmail.com>
> > To: "Yedidyah Bar David" <didi(a)redhat.com>, sbonazzo(a)redhat.com
> > Cc: users(a)ovirt.org
> > Sent: Tuesday, December 16, 2014 1:03:17 PM
> > Subject: Re: [ovirt-users] Problem after update ovirt to 3.5
> >
> > Hello everybody,
> >
> > It was the firewall; after upgrading my engine the NFS configuration had
> > disappeared. I configured it again as Red Hat says and now it works
> > properly again.
> >
> > Many thanks again for the indications.
>
> We already had a patch for it [1];
> it will be released next month with oVirt 3.5.1.
>
> [1] http://gerrit.ovirt.org/#/c/32874/
>
> > Juanjo.
> >
> > On Mon, Dec 15, 2014 at 2:32 PM, Yedidyah Bar David <didi(a)redhat.com>
> > wrote:
> >
> > ----- Original Message -----
> > > From: "Juan Jose" <jj197005(a)gmail.com>
> > > To: users(a)ovirt.org
> > > Sent: Monday, December 15, 2014 3:17:15 PM
> > > Subject: [ovirt-users] Problem after update ovirt to 3.5
> > >
> > > Hello everybody,
> > >
> > > After upgrading my engine to oVirt 3.5, I have also upgraded one of my
> > > hosts to oVirt 3.5. After that, everything apparently went well.
> > >
> > > But after some seconds my ISO domain is disconnected and it is
> > > impossible to Activate it. I'm attaching my engine.log. The error
> > > below is shown each time I try to Activate the ISO domain. Before the
> > > upgrade it was working without problems:
> > >
> > > 2014-12-15 13:25:07,607 ERROR
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, Call
> > > Stack: null, Custom Event ID: -1, Message: Failed to connect Host host1 to
> > > the Storage Domains ISO_DOMAIN.
> > > 2014-12-15 13:25:07,608 INFO
> > > [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] FINISH,
> > > ConnectStorageServerVDSCommand, return:
> > > {81c0a853-715c-4478-a812-6a74808fc482=477}, log id: 3590969e
> > > 2014-12-15 13:25:07,615 ERROR
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, Call
> > > Stack: null, Custom Event ID: -1, Message: The error message for connection
> > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 returned by
> > > VDSM was: Problem while trying to mount target
> > > 2014-12-15 13:25:07,616 ERROR
> > > [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with details
> > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 failed because
> > > of error code 477 and error message is: problem while trying to mount
> > > target
> > >
> > > If any other information is required, please tell me.
> >
> > Is the ISO domain on the engine host?
> >
> > Please check there iptables and /etc/exports, /etc/exports.d.
> >
> > Please post the setup (upgrade) log, check /var/log/ovirt-engine/setup.
> >
> > Thanks,
> > --
> > Didi

------------------------------

Message: 2
Date: Sun, 28 Dec 2014 23:56:58 +0000
From: Dan Kenigsberg <danken(a)redhat.com>
To: Blaster <Blaster(a)556nato.com>
Cc: "Users(a)ovirt.org List" <users(a)ovirt.org>
Subject: Re: [ovirt-users] bond mode balance-alb

On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
> On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
> > Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
> > https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
>
> Dan,
>
> What is bad about these modes that oVirt can't use them?

I can only quote jpirko's words from the link above:

    Do not use tlb or alb in bridge, never! It does not work, that's it. The
    reason is it mangles source macs in xmit frames and arps. When it is
    possible, just use mode 4 (lacp). That should be always possible because
    all enterprise switches support that. Generally, for 99% of use cases,
    you *should* use mode 4. There is no reason to use other modes.

> I just tested mode 4, and the LACP with Fedora 20 appears to not be
> compatible with the LAG mode on my Dell 2824.
>
> Would there be any issues with bringing two NICS into the VM and doing
> balance-alb at the guest level?

------------------------------

Message: 3
Date: Sun, 28 Dec 2014 20:53:44 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Artyom Lukianov <alukiano(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5

I checked it again and confirmed there is one guest VM running on top of
this host. The log is as follows:

[root@compute2-1 vdsm]# ps -ef | grep qemu
qemu      2983   846  0 Dec19 ?        00:00:00 [supervdsmServer] <defunct>
root      5489  3053  0 20:49 pts/0   00:00:00 grep --color=auto qemu
qemu     26128     1  0 Dec19 ?        01:09:19 /usr/libexec/qemu-kvm
-name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
-uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:18:01,driftfix=slew -no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)

Thanks,
Cong

On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:

I see that you set local maintenance on host3, which does not have the
engine VM on it, so there is nothing to migrate from this host.
If you set local maintenance on host1, the VM must migrate to another host
with a positive score.
Thanks

----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
To: "Simone Tiraboschi" <stirabos(a)redhat.com>
Cc: users(a)ovirt.org
Sent: Saturday, December 27, 2014 6:58:32 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5

Hi

I had a try with "hosted-engine --set-maintenance --mode=local" on
compute2-1, which is host 3 in my cluster. From the log, it shows
maintenance mode is detected, but migration does not happen.

The logs are as follows. Is there any other config I need to check?

[root@compute2-1 vdsm]# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.94
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up",
"detail": "up"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 836296
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=836296 (Sat Dec 27 11:42:39 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp

--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.93
Host ID                            : 2
Engine status                      : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 687358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=687358 (Sat Dec 27 08:42:04 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown

--== Host 3 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.92
Host ID                            : 3
Engine status                      : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 0
Local maintenance                  : True
Host timestamp                     : 681827
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=681827 (Sat Dec 27 08:42:40 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance

[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)

[root@compute2-3 ~]# tail -f /va
r/log/ovirt-hosted-engine-ha/agent.log<br>MainThread::INFO::2014-12-27<br>1=
1:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine=
.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (id: 2, sco=
re: 2400)<br>MainThread::INFO::2014-12-27<br>11:36:39,130::hosted_engine::3=
27::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitor=
ing)<br>Current state EngineUp (score: 2400)<br>MainThread::INFO::2014-12-2=
7<br>11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_=
engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (id: =
2, score: 2400)<br>MainThread::INFO::2014-12-27<br>11:36:49,449::hosted_eng=
ine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_m=
onitoring)<br>Current state EngineUp (score: 2400)<br>MainThread::INFO::201=
4-12-27<br>11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.h=
osted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93=
(id: 2, score: 2400)<br>MainThread::INFO::2014-12-27<br>11:36:59,739::host=
ed_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(s=
tart_monitoring)<br>Current state EngineUp (score: 2400)<br>MainThread::INF=
O::2014-12-27<br>11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.=
0.0.93 (id: 2, score: 2400)<br>MainThread::INFO::2014-12-27<br>11:37:09,779=
::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(co=
nsume)<br>Engine vm running on localhost<br>MainThread::INFO::2014-12-27<br=
>11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engi=
ne.HostedEngine::(start_monitoring)<br>Current state EngineUp (score: 2400)=
<br>MainThread::INFO::2014-12-27<br>11:37:10,026::hosted_engine::332::ovirt=
_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>B=
est remote host 10.0.0.93 (id: 2, score: 2400)<br>MainThread::INFO::2014-12=
-27<br>11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hoste=
d_engine.HostedEngine::(start_monitoring)<br>Current state EngineUp (score:=
2400)<br>MainThread::INFO::2014-12-27<br>11:37:20,331::hosted_engine::332:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>Best remote host 10.0.0.93 (id: 2, score: 2400)<br><div><br></div><br>=
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27 08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-27 08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.94 (id 1): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987 (Sat Dec 27 11:37:30 2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n', 'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status': {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400, 'maintenance': False, 'host-ts': 835987}
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.92 (id 3): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528 (Sat Dec 27 08:37:41 2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n', 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True, 'host-ts': 681528}
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215, 'gateway': True}
MainThread::INFO::2014-12-27 08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)

Thanks,
Cong

On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos@redhat.com> wrote:

----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: "Simone Tiraboschi" <stirabos@redhat.com>
Cc: users@ovirt.org
Sent: Friday, December 19, 2014 7:22:10 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

Thanks for the information. This is the log for my three ovirt nodes.
From the output of hosted-engine --vm-status, it shows the engine state for
my 2nd and 3rd ovirt nodes is DOWN.
Is this the reason why VM failover does not work in my environment?

No, they look OK: you can run the engine VM on a single host at a time.

How can I also make the engine work on my 2nd and 3rd ovirt nodes?

If you put host 1 in local maintenance mode ( hosted-engine --set-maintenance --mode=local ) the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put host 2 in local maintenance mode, the VM should migrate again.

Can you please try that and post the logs if something goes bad?

--
--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.94
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 150475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=150475 (Fri Dec 19 13:12:18 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp

--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.93
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 1572
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1572 (Fri Dec 19 10:12:18 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown

--== Host 3 status ==--

Status up-to-date                  : False
Hostname                           : 10.0.0.92
Host ID                            : 3
Engine status                      : unknown stale-data
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 987
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=987 (Fri Dec 19 10:09:58 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown
--
And the /var/log/ovirt-hosted-engine-ha/agent.log for the three ovirt nodes
is as follows:
--
10.0.0.94 (hosted-engine-1)
---
MainThread::INFO::2014-12-19 13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
MainThread::INFO::2014-12-19 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.93 (id 2): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448 (Fri Dec 19 10:10:14 2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 'host-ts': 1448}
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.92 (id 3): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987 (Fri Dec 19 10:09:58 2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 'host-ts': 987}
MainThread::INFO::2014-12-19 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance': False, 'cpu-load': 0.0269, 'gateway': True}
MainThread::INFO::2014-12-19 13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
MainThread::INFO::2014-12-19 13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
----

10.0.0.93 (hosted-engine-2)
MainThread::INFO::2014-12-19 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)

10.0.0.92 (hosted-engine-3)
same as 10.0.0.93
--

-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:28 AM
To: Yue, Cong
Cc: users@ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5

----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: users@ovirt.org
Sent: Friday, December 19, 2014 2:14:33 AM
Subject: [ovirt-users] VM failover with ovirt3.5

Hi

In my environment, I have 3 ovirt nodes as one cluster. And on top of
host-1, there is one VM to host the ovirt engine.

Also I have one external storage for the cluster to use as the data domain
for engine and data.

I confirmed live migration works well in my environment.

But VM failover seems very buggy if I try to force one ovirt node to shut
down. Sometimes the VM on the node which is shut down can migrate to
another host, but it takes more than several minutes.

Sometimes, it cannot migrate at all. Sometimes, the VM only begins to move
once the host is back.

Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/
?

Is there some documentation that explains how VM failover works? And
are there any reported bugs related to this?

http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram

Thanks in advance,
Cong

This e-mail message is for the sole use of the intended recipient(s)
and may contain confidential and privileged information. Any
unauthorized review, use, disclosure or distribution is prohibited. If
you are not the intended recipient, please contact the sender by reply
e-mail and destroy all copies of the original message. If you are the
intended recipient, please be advised that the content of this message
is subject to access, review and disclosure by the sender's e-mail System
Administrator.

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

End of Users Digest, Vol 39, Issue 163
**************************************
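The `hosted-engine --vm-status` output quoted in this thread is line-oriented and easy to post-process when watching a failover across several hosts. A small sketch of a parser (a hypothetical helper, not part of any oVirt tooling; the sample text is abbreviated from the output above):

```python
import re


def parse_vm_status(text):
    """Parse `hosted-engine --vm-status` output into {host_id: {key: value}}."""
    hosts, current = {}, None
    for line in text.splitlines():
        m = re.match(r"--== Host (\d+) status ==--", line.strip())
        if m:
            current = int(m.group(1))
            hosts[current] = {}
            continue
        if current is not None and ":" in line:
            # Split on the first colon; lines with an empty value
            # (e.g. "Extra metadata (valid at timestamp):") are skipped.
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key and value:
                hosts[current][key] = value
    return hosts


sample = """\
--== Host 1 status ==--

Status up-to-date : True
Hostname          : 10.0.0.94
Score             : 2400
"""
print(parse_vm_status(sample)[1]["Hostname"])  # -> 10.0.0.94
```

A watcher script could call this on each poll and alert when a host's `Score` drops or `Status up-to-date` goes False, as it does for Host 3 in the Dec 19 output.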
10 years, 4 months
Re: [ovirt-users] Backup and Restore of VMs
by Nathanaël Blanchet
On 29/12/2014 12:10, Nathanaël Blanchet wrote:
> Hello,
>
> Thank you for the script; yes, it is clearer now.
> However, there is something I misunderstand; my reasoning may be
> stupid, just tell me.
> It is closely about the backup process, precisely when the disk is
> attached to the VM... At this moment, an external process should do this
> step. If we consider using the dd command to make a byte-for-byte copy
> of the snapshot disk, why not directly attach this cloned raw
> virtual disk to the new OVF-cloned VM instead of creating a new
> provisioned disk?
> Alternatively, you might consider doing a file-level copy during the
> backup process (rsync-like), which implies formatting the newly created
> disk and many additional steps, such as creating logical volumes if
> needed, etc.
> Can anybody help me understand this step?
> Thank you.
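The byte-for-byte copy step asked about above is usually just dd run against the snapshot disk once it is attached to the backup VM. The same chunked copy can be sketched in Python (illustrative only; the device path and output file are placeholders):

```python
def copy_device(src_path, dst_path, block_size=4 * 1024 * 1024):
    """Chunked byte-for-byte copy, similar to `dd bs=4M if=src of=dst`.

    Returns the number of bytes copied.
    """
    copied = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            chunk = src.read(block_size)
            if not chunk:
                break
            dst.write(chunk)
            copied += len(chunk)
    return copied


# Example with placeholder paths: the source would be the snapshot disk
# as it appears inside the backup VM, e.g. /dev/vdb.
# copy_device("/dev/vdb", "/backup/vm_disk.raw")
```

Whether the resulting raw image is re-attached directly (as Nathanaël suggests) or restored into a freshly provisioned disk is exactly the design choice under discussion here.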
>
> On 28/12/2014 10:02, Liron Aravot wrote:
>> Hi All,
>> I've uploaded an example script (oVirt python-sdk) that contains
>> examples to the steps
>> described on
>> http://www.ovirt.org/Features/Backup-Restore_API_Integration
>>
>> let me know how it works out for you -
>> https://github.com/laravot/backuprestoreapi
>>
>>
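The first step in the Backup-Restore flow linked above is taking a live snapshot of the VM to be backed up. Against the oVirt 3.5 REST API this is a POST to the VM's snapshots collection; a hedged sketch using only the standard library (engine URL, credentials, and VM id are placeholders, and the exact XML accepted by your engine version should be checked against its API docs):

```python
import base64
import urllib.request


def snapshot_request(engine_url, vm_id, description):
    """Build (but do not send) a POST request asking the oVirt REST API
    to create a snapshot of the given VM."""
    body = ("<snapshot><description>%s</description></snapshot>"
            % description).encode("utf-8")
    return urllib.request.Request(
        "%s/api/vms/%s/snapshots" % (engine_url.rstrip("/"), vm_id),
        data=body,  # presence of a body makes urlopen() issue a POST
        headers={"Content-Type": "application/xml"},
    )


if __name__ == "__main__":
    # Placeholder engine address, VM id, and credentials.
    req = snapshot_request("https://engine.example.com",
                           "e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d",
                           "nightly-backup")
    creds = base64.b64encode(b"admin@internal:password").decode()
    req.add_header("Authorization", "Basic " + creds)
    # urllib.request.urlopen(req)  # uncomment to run against a real engine
```

The python-sdk example repository linked above wraps the same calls with proper error handling and the follow-up steps (attach the snapshot's disk to the backup appliance, copy, detach, delete the snapshot).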
>> ----- Original Message -----
>>> From: "Liron Aravot" <laravot(a)redhat.com>
>>> To: "Soeren Malchow" <soeren.malchow(a)mcon.net>
>>> Cc: "Vered Volansky" <vered(a)redhat.com>, Users(a)ovirt.org
>>> Sent: Wednesday, December 24, 2014 12:20:36 PM
>>> Subject: Re: [ovirt-users] Backup and Restore of VMs
>>>
>>> Hi guys,
>>> I'm currently working on complete example of the steps appear in -
>>> http://www.ovirt.org/Features/Backup-Restore_API_Integration
>>>
>>> will share with you as soon as i'm done with it.
>>>
>>> thanks,
>>> Liron
>>>
>>> ----- Original Message -----
>>>> From: "Soeren Malchow" <soeren.malchow(a)mcon.net>
>>>> To: "Vered Volansky" <vered(a)redhat.com>
>>>> Cc: Users(a)ovirt.org
>>>> Sent: Wednesday, December 24, 2014 11:58:01 AM
>>>> Subject: Re: [ovirt-users] Backup and Restore of VMs
>>>>
>>>> Dear Vered,
>>>>
>>>> at some point we have to start, and right now we are getting
>>>> closer; even
>>>> with the documentation it is sometimes hard to find the correct
>>>> place to
>>>> start, especially without specific examples (and I have decades of
>>>> experience now)
>>>>
>>>> with the backup plugin that came from Lucas Vandroux we have a
>>>> starting
>>>> point
>>>> right now, and we will continue from here and try to work with him
>>>> on this.
>>>>
>>>> Regards
>>>> Soeren
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] On
>>>> Behalf Of
>>>> Blaster
>>>> Sent: Tuesday, December 23, 2014 5:49 PM
>>>> To: Vered Volansky
>>>> Cc: Users(a)ovirt.org
>>>> Subject: Re: [ovirt-users] Backup and Restore of VMs
>>>>
>>>> Sounds like a Chicken/Egg problem.
>>>>
>>>>
>>>>
>>>> On 12/23/2014 12:03 AM, Vered Volansky wrote:
>>>>> Well, real world is community...
>>>>> Maybe change the name of the thread in order to make this clearer
>>>>> for someone from the community who might be able to help.
>>>>> Maybe something like:
>>>>> Request for sharing real world example of VM backups.
>>>>>
>>>>> We obviously use it as part as developing, but I don't have what
>>>>> you're
>>>>> asking for.
>>>>> If you try it yourself and stumble onto questions in the process,
>>>>> please
>>>>> ask the list and we'll do our best to help.
>>>>>
>>>>> Best Regards,
>>>>> Vered
>>>>>
>>>>> ----- Original Message -----
>>>>>> From: "Blaster" <blaster(a)556nato.com>
>>>>>> To: "Vered Volansky" <vered(a)redhat.com>
>>>>>> Cc: Users(a)ovirt.org
>>>>>> Sent: Tuesday, December 23, 2014 5:56:13 AM
>>>>>> Subject: Re: [ovirt-users] Backup and Restore of VMs
>>>>>>
>>>>>>
>>>>>> Vered,
>>>>>>
>>>>>> It sounds like Soeren already knows about that page. His issue, as
>>>>>> well as the issue of others judging by comments on here, is
>>>>>> that there aren't any real-world examples of how the API is used.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Dec 22, 2014, at 9:26 AM, Vered Volansky <vered(a)redhat.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Please take a look at:
>>>>>>> http://www.ovirt.org/Features/Backup-Restore_API_Integration
>>>>>>>
>>>>>>> Specifically:
>>>>>>> http://www.ovirt.org/Features/Backup-Restore_API_Integration#Full_VM
>>>>>>>
>>>>>>> _Backups
>>>>>>>
>>>>>>> Regards,
>>>>>>> Vered
>>>>>>>
>>>>>>> ----- Original Message -----
>>>>>>>> From: "Soeren Malchow" <soeren.malchow(a)mcon.net>
>>>>>>>> To: Users(a)ovirt.org
>>>>>>>> Sent: Friday, December 19, 2014 1:44:38 PM
>>>>>>>> Subject: [ovirt-users] Backup and Restore of VMs
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Dear all,
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> ovirt: 3.5
>>>>>>>>
>>>>>>>> gluster: 3.6.1
>>>>>>>>
>>>>>>>> OS: CentOS 7 (except ovirt hosted engine = centos 6.6)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> i spent quite a while researching backup and restore for VMs right
>>>>>>>> now, so far I have come up with this as a start for us
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> - API calls to create scheduled snapshots of virtual machines. This
>>>>>>>> is for short-term storage and to guard against accidental deletion
>>>>>>>> within the VM, but not against storage corruption
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> - Since we are using a gluster backend, gluster snapshots. I haven't
>>>>>>>> been able to really test this so far, since the LV needs to be thin
>>>>>>>> provisioned and we did not do that in the setup
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> For the API calls we have the problem that we cannot find any
>>>>>>>> existing scripts or anything like that to do those snapshots (and
>>>>>>>> I/we are not developer enough to write them ourselves).
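As a starting point, the snapshot call the oVirt 3.5 REST API expects can be sketched in Python. This is only a sketch: the engine URL, credentials and VM id below are placeholders, and the request is built but never sent.

```python
# Sketch only: create a VM snapshot via POST /api/vms/{vm_id}/snapshots.
# Engine URL, credentials and the VM id are placeholder assumptions.
import base64
import urllib.request
import xml.etree.ElementTree as ET

def snapshot_body(description):
    """Build the XML body the snapshots collection expects."""
    root = ET.Element("snapshot")
    ET.SubElement(root, "description").text = description
    return ET.tostring(root)

def snapshot_request(engine_url, user, password, vm_id, description):
    """Prepare (but do not send) the authenticated POST request."""
    auth = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    return urllib.request.Request(
        "%s/api/vms/%s/snapshots" % (engine_url, vm_id),
        data=snapshot_body(description),
        headers={"Content-Type": "application/xml",
                 "Authorization": "Basic " + auth})

req = snapshot_request("https://engine.example.com", "admin@internal",
                       "secret", "e3d75e55-2b41-4f0b-8d2a-16f8fde2ba0d",
                       "nightly backup")
print(req.full_url)
```

Once pointed at a real engine, the request can be fired with urllib.request.urlopen(req), and scheduling is then just a cron job around it.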
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> As additional information, we have a ZFS-based storage with
>>>>>>>> deduplication that we use for other backup purposes, which does a
>>>>>>>> great job especially because of the deduplication (we can store
>>>>>>>> generations of backups without problems); this storage can be
>>>>>>>> NFS exported and used as a backup repository.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Are there any backup and restore procedures you guys are using
>>>>>>>> that work for you, and can you point me in the right direction?
>>>>>>>>
>>>>>>>> I am a little bit lost right now and would appreciate any help.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>>
>>>>>>>> Soeren
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Users mailing list
>>>>>>>> Users(a)ovirt.org
>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>>
>>>>
10 years, 4 months
vdsm noipspoof.py vdsm hook problem
by InterNetX - Juergen Gotteswinter
Hi,
I am trying to get the noipspoof.py hook up and running, which works
fine so far if I only feed it a single IP. When trying to add two or more,
as described in the source (comma separated), the GUI tells me that
this isn't expected and won't let me do it.
I already tried modifying the regex, which made the engine take a
2nd/3rd IP (comma separated), but it seems something is still wrong
somewhere else with parsing this.
VDSM throws this:
vdsm vm.Vm ERROR vmId=`4c9cb160-2283-4769-a69c-434e6c992c2b`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2266, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3332, in _run
    domxml = hooks.before_vm_start(self._buildCmdLine(), self.conf)
  File "/usr/share/vdsm/hooks.py", line 142, in before_vm_start
    return _runHooksDir(domxml, 'before_vm_start', vmconf=vmconf)
  File "/usr/share/vdsm/hooks.py", line 110, in _runHooksDir
    raise HookError()
HookError
The VM fails to start, and the engine tries this on every available host
(all of which, not surprisingly, fail too).
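For reference, the splitting and validation the hook ultimately has to do can be sketched like this (standalone modern Python, not the actual noipspoof code, which should be checked for how it reads the custom property):

```python
# Sketch: tolerant parsing of a comma-separated IP list, roughly what a
# vdsm hook has to do with the custom property value it receives.
import ipaddress

def parse_ip_list(value):
    """Split on commas, strip whitespace, validate each entry.
    Raises ValueError on anything that is not a valid IPv4/IPv6 address."""
    ips = []
    for part in value.split(","):
        part = part.strip()
        if part:
            ips.append(str(ipaddress.ip_address(part)))
    return ips

print(parse_ip_list("10.0.0.1, 10.0.0.2,10.0.0.3"))
```

A ValueError raised here (for example from a stray character the GUI regex let through) would make the hook exit non-zero, which vdsm reports exactly as the HookError seen in the traceback above.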
Anyone have any ideas / patches / hints on how to modify this hook?
Thanks
Juergen
10 years, 4 months
Problem after update ovirt to 3.5
by Juan Jose
Hello everybody,
After upgrading my engine to oVirt 3.5, I also upgraded one of my hosts
to oVirt 3.5. After that everything apparently seemed to have gone well.
But within a few seconds my ISO domain is disconnected and it is impossible
to Activate it. I'm attaching my engine.log. The error below is shown each
time I try to Activate the ISO domain. Before the upgrade it was working
without problems:
2014-12-15 13:25:07,607 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: Failed to connect Host host1 to
the Storage Domains ISO_DOMAIN.
2014-12-15 13:25:07,608 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-5) [460733dd] FINISH,
ConnectStorageServerVDSCommand, return:
{81c0a853-715c-4478-a812-6a74808fc482=477}, log id: 3590969e
2014-12-15 13:25:07,615 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: The error message for connection
ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 returned by
VDSM was: Problem while trying to mount target
2014-12-15 13:25:07,616 ERROR
[org.ovirt.engine.core.bll.storage.NFSStorageHelper]
(org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with details
ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 failed because
of error code 477 and error message is: problem while trying to mount target
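When VDSM returns error 477 ("problem while trying to mount target"), repeating the mount it attempts by hand on the host usually surfaces the real errno. A sketch that rebuilds an approximation of the command — the options are typical vdsm NFS options, not taken from this log:

```python
# Sketch: approximate the NFS mount command vdsm issues for a domain,
# so it can be retried by hand as root. Options are an approximation.
EXPORT = "ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312"
MOUNTPOINT = "/tmp/iso-test"  # mkdir -p this first

def mount_cmd(export, mountpoint,
              opts="soft,nosharecache,timeo=600,retrans=6,nfsvers=3"):
    """Build the argv for a manual NFS mount attempt."""
    return ["mount", "-t", "nfs", "-o", opts, export, mountpoint]

# Paste the printed command into a root shell on the host:
print(" ".join(mount_cmd(EXPORT, MOUNTPOINT)))
```

If the manual mount fails too, the error the mount command (or dmesg) prints is far more specific than the 477 code the engine reports.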
If any other information is required, please tell me.
Many thanks in advanced,
Juanjo.
10 years, 4 months
Stucked VM Migration and now only run once
by Kurt Woitschach
Hi all,
we have a Problem with a VM that can only be started in run-once mode.
After a temporary network disconnect on the hosting node, the vm (and
some others) was down. When I tried to start regularly, it showed a
currently beeing migrated status.
I only could start it with run-once.
Reboot didn't make a change.
Any ideas?
Greets
Kurt
--
Kurt Woitschach-Müller kurt.woitschach-mueller(a)tngtech.com * +49-1743180076
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Geschäftsführer: Henrik Klagges, Gerhard Müller, Christoph Stock
Sitz: Unterföhring * Amtsgericht München * HRB 135082
10 years, 4 months
Re: [ovirt-users] Can not connect to Storage domain data
by Yue, Cong
I found the workaround for this.
For some reason my data Storage domain cannot be mounted. I just mounted
it manually, like:
mount -t nfs nfs2-3:/data /rhev/data-center/mnt/nfs2-3:_data
Actually, the folder "/rhev/data-center/mnt/nfs2-3:_data" had already been
created. I think this may be a bug, as in my environment I can reproduce it
every time I try to deploy the host a second time.
Thanks,
Cong
From: Yue, Cong
Sent: Thursday, December 18, 2014 2:17 PM
To: 'users(a)ovirt.org'
Subject: RE: Can not connect to Storage domain data
I think the problems with my issue are related to the NFS version.
On the second host, if I change the value of Defaultvers in
/etc/nfsmount.conf from "Defaultvers=4" to "Defaultvers=3", the mount
cannot be done. When I change it back to "Defaultvers=4", it works.
Also, /proc/mounts shows the NFS version is nfs4. But for my first host,
it is nfs3.
Does somebody have a similar issue with this?
Thanks in advance,
Cong
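For what it's worth, the negotiated NFS version can be read straight out of /proc/mounts; a small sketch (the sample line below is constructed for illustration, not taken from a real host):

```python
# Sketch: report the NFS version a mount point was negotiated with,
# by parsing /proc/mounts fields (device, mountpoint, fstype, options).
def nfs_version(mountpoint, mounts_text):
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == mountpoint:
            for opt in fields[3].split(","):
                if opt.startswith("vers="):
                    return opt.split("=", 1)[1]
    return None

# On a live host: text = open("/proc/mounts").read()
sample = ("nfs2-3:/data /rhev/data-center/mnt/nfs2-3:_data nfs4 "
          "rw,relatime,vers=4.0,rsize=1048576 0 0")
print(nfs_version("/rhev/data-center/mnt/nfs2-3:_data", sample))
```

Comparing this output between the first and second host makes the version mismatch described above visible at a glance.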
From: Yue, Cong
Sent: Thursday, December 18, 2014 9:52 AM
To: users(a)ovirt.org<mailto:users@ovirt.org>
Subject: Can not connect to Storage domain data
Hi
I successfully deployed the first oVirt host with hosted-engine --deploy.
The Engine VM works well.
But when I try to create the second host the same way as in the guide at
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/
I am not using GlusterFS, and just use one external storage (NFS) in my
environment.
The issue I have is that the engine administration menu says "can not
connect to storage domain data".
On the second host, I checked both the storage and data domains with
nfs-check.py. It shows the status is ok.
http://www.ovirt.org/Troubleshooting_NFS_Storage_Issues
During deployment of the second host, how is the data domain mounted?
________________________________
This e-mail message is for the sole use of the intended recipient(s) and
may contain confidential and privileged information. Any unauthorized
review, use, disclosure or distribution is prohibited. If you are not the
intended recipient, please contact the sender by reply e-mail and destroy
all copies of the original message. If you are the intended recipient,
please be advised that the content of this message is subject to access,
review and disclosure by the sender's e-mail System Administrator.
10 years, 4 months
Re: [ovirt-users] Introduction!
by Donny Davis
Welcome to ovirt.
If you want to see ovirt check out cloudspin.me
Its free
Happy Connecting. Sent from my Sprint Samsung Galaxy S® 5
-------- Original message --------
From: Yedidyah Bar David <didi(a)redhat.com>
Date: 12/23/2014 11:49 PM (GMT-07:00)
To: Tom Weeks <tom.m.weeks(a)gmail.com>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Introduction!
Hi Tom,
----- Original Message -----
> From: "Tom Weeks" <tom.m.weeks(a)gmail.com>
> To: users(a)ovirt.org
> Sent: Wednesday, December 24, 2014 4:15:25 AM
> Subject: [ovirt-users] Introduction!
>
> Hello,
>
> I am happy to join the community and help support the project. I'm a long
> time user of vSphere/vCenter but I am an aspiring to work within open-source
> world. My work experience is in corporate environments and includes
> virtualization, storage, as well as basic networking.
>
> I hope I can start helping by converting my homelab from the VMware stack to
> oVirt. I can contribute by submitting bug requests and documentation...let
> me know if that would be helpful!
That would definitely be helpful!
Good luck and best regards,
--
Didi
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
10 years, 4 months
Introduction!
by Tom Weeks
Hello,
I am happy to join the community and help support the project. I'm a long
time user of vSphere/vCenter but I am aspiring to work within the
open-source world. My work experience is in corporate environments and
includes virtualization, storage, as well as basic networking.
I hope I can start helping by converting my homelab from the VMware stack
to oVirt. I can contribute by submitting bug requests and
documentation...let me know if that would be helpful!
-Tom
10 years, 4 months
Re: [ovirt-users] Migration from Proxmox 3.x to Ovirt
by Myles Wakeham
Nicolas writes:
> I would be glad too to hear about the way to do a 'one step VM migration'
> between two oVirt datacenters...
Hmmm... Maybe I'm making an assumption here about a feature that doesn't exist. In Proxmox, once you have defined a 'cluster' of hypervisors, and they achieve Quorum (e.g. they can all see each other), you can select a single HN (VM) and select to 'Migrate' to another hypervisor right from the web interface. When you process it, it takes a snapshot of the HN and moves it to the target hypervisor, and then brings it up on the target.
Is that not possible with oVirt?
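It is — besides choosing Migrate on a running VM in the web admin, oVirt exposes the same operation through its REST API. A sketch of the request body for POST /api/vms/{vm_id}/migrate (the body is only built here, not sent, and the host name is a placeholder):

```python
# Sketch: live-migrate a VM via POST /api/vms/{vm_id}/migrate.
# An empty <action/> body lets the engine pick the destination host.
import xml.etree.ElementTree as ET

def migrate_body(host_name=None):
    action = ET.Element("action")
    if host_name:  # optionally pin the destination host
        host = ET.SubElement(action, "host")
        ET.SubElement(host, "name").text = host_name
    return ET.tostring(action)

print(migrate_body())
print(migrate_body("hypervisor02"))
```

Letting the engine pick the destination is usually the right default, since the scheduler already knows each host's load and compatibility.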
Myles
=================
Myles Wakeham
Chief Technology Officer
Edgeneering LLC
http://www.edgeneering.com
Ph: +1-480-553-8940
Fax: +1-480-452-1979
10 years, 4 months
Migration from Proxmox 3.x to Ovirt
by Myles Wakeham
We are considering migrating a number of hypervisors from Proxmox 3.x to Ovirt and I was reaching out to see if anyone here had gone through this process and might have some war stories to share?
The bulk of our VMs are OpenVZ containers running Linux, but we have a handful of KVMs with Windows Server 2008. We've used the virtio drivers in those KVM servers. The biggest issue for us is the ridiculously complex clustering model with PM; we have multiple data centers with colocated servers in racks, and some do not allow multicasting between them, forcing us to change out our VPNs between the servers. Our goal is to allow a 'one step migration' capability of VMs between data centers, which is a major effort to get set up with PM since v2.
If oVirt can help us achieve this, I'm all ears as I think we are ready to make this migration happen.
Thanks in advance for any suggestions or comments.
Myles
=================
Myles Wakeham
Chief Technology Officer
Edgeneering LLC
http://www.edgeneering.com
Ph: +1-480-553-8940
Fax: +1-480-452-1979
10 years, 4 months
Two new plugins for oVirt
by Lucas Vandroux
Dear all,
We developed 2 new plugins for the oVirt-Engine.
The first one is to interact with the engine-manage-domains tool directly
from WebAdmin: https://github.com/ovirt-china/manage-domains-plugin
The second one is to schedule automatic backups of your vms:
https://github.com/ovirt-china/vm-backup-scheduler
Maybe they can help you.
Best regards,
Lucas Vandroux for the oVirt-China Team (http://ovirt-china.org/)
10 years, 4 months
SPM host and snapshot deletion
by Demeter Tibor
Hi,
I have an oVirt 3.5 setup with glusterfs and three nodes, CentOS 6.5 and glusterfs 3.5.2.
When I do a snapshot deletion on a stopped VM, it eats all of the virtual memory on the SPM host, and with this kills all of the VMs running on the SPM host.
It is a very big problem because I need to delete a lot of snapshots.
In this case I need to power off these VMs because there are no other options for stopping them.
I've tried live migration, but in this case the live migration does not work either.
Is this a known bug?
Thanks in advance.
Tibor
10 years, 4 months
oVirt bonding mode4 + cisco 2960 XR
by Алексей Николаев
Hi, community!
I have made a bond0 in mode 4 (eth0+eth1+eth2+eth3) through the oVirt portal. It works
well on a CentOS 7 node.
How can I set up my Cisco 2960-XR switch to work with this bond0 for
load balancing + aggregation (802.3ad)?
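On the switch side, the matching setup is an LACP port-channel. A minimal sketch for a Catalyst 2960-X/XR family switch, assuming the four server-facing ports are GigabitEthernet1/0/1-4 and channel-group 1 (port names, VLAN mode, and available load-balance methods depend on your hardware and IOS version, so verify against your own switch):

```
! LACP ("mode active") matches the bond's 802.3ad / mode 4
interface range GigabitEthernet1/0/1 - 4
 description oVirt node bond0 members
 channel-group 1 mode active
!
interface Port-channel1
 switchport mode access        ! use "trunk" if the bond carries tagged VLANs
!
port-channel load-balance src-dst-ip
```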
10 years, 4 months
Centos 7 guest on rhev 3.4
by Jakub Bittner
Hello,
we are running RHEV 3.4 and we installed CentOS 7.0 as a guest with the KDE
GUI. When I connect to this guest via SPICE I can see the desktop, but it
doesn't resize. I have to manually change the resolution inside the guest (CentOS
7). I have the ovirt-guest-agent and spice-vdagent installed and running in the guest. We
are connecting from CentOS 7. (Resizing a CentOS 6 guest works as expected.)
In the CentOS 7 guest we have:
spice-server-0.12.4-5.el7_0.1.x86_64
spice-gtk3-0.20-8.el7.x86_64
spice-glib-0.20-8.el7.x86_64
spice-xpi-2.8-5.el7.x86_64
spice-vdagent-0.14.0-7.el7.x86_64
ovirt-guest-agent-common-1.0.10-2.el7.noarch
On the CentOS 7 machine from which we connect:
virt-viewer-0.5.7-7.el7.x86_64
spice-gtk3-0.20-8.el7.x86_64
spice-vdagent-0.14.0-7.el7.x86_64
spice-xpi-2.8-5.el7.x86_64
spice-glib-0.20-8.el7.x86_64
Maybe I should install some package in the guest, but I don't know which one.
Thanks for the help.
10 years, 4 months
Replace Failed Master Data Domain
by Jerry Champlin
List:
What is the process for replacing a master data domain? The storage
attached to it was corrupted and has been replaced. We need to get the
data domain back. Any pointers greatly appreciated.
-Jerry
Jerry Champlin
Absolute Performance Inc.
Phone: 303-565-4401
--
Enabling businesses to deliver critical applications at lower cost and
higher value to their customers.
NON-DISCLOSURE NOTICE: This communication including any and all
attachments is for the intended recipient(s) only and may contain
confidential and privileged information. If you are not the intended
recipient of this communication, any disclosure, copying further
distribution or use of this communication is prohibited. If you received
this communication in error, please contact the sender and delete/destroy
all copies of this communication immediately.
10 years, 4 months
Change host hostname/ip
by Zenon D'Elee
Hello,
I have a hosted oVirt infrastructure with only one host and one storage domain. I don't
know how to change the host's IP address in the oVirt web manager (it is greyed out). Do
I have to use the "ovirt-engine-rename" command-line tool on the oVirt VM? Do I
have to put the host in maintenance?
Thank you for helping.
10 years, 4 months
Re: [ovirt-users] Strange error messages
by Timothy Asir Jeyasingh
oVirt stores the hook files the first time it discovers
any Gluster hook file in the specific path (/var/lib/glusterd/hooks/1) on a node,
then keeps checking the hook files on every node against its master copy
and raises a conflict message whenever it finds any differences.
To resolve such a conflict, the oVirt UI provides a resolve option,
with which one can copy the master hook file to every node, or
select a particular host's hook file to be copied to every node
and update oVirt's master copy.
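The sync described above boils down to comparing each node's copy of a hook file against the engine's stored master. A rough sketch of that idea in Python (this is not oVirt's actual code; the function and node names are illustrative):

```python
# Sketch of hook-conflict detection: a node is "in conflict" when the
# checksum of its copy of a hook file differs from the master's.
import hashlib

def checksum(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def find_conflicts(master: bytes, node_copies: dict) -> list:
    """Return the names of nodes whose copy differs from the master."""
    ref = checksum(master)
    return [node for node, content in node_copies.items()
            if checksum(content) != ref]

conflicts = find_conflicts(b"#!/bin/sh\n",
                           {"node1": b"#!/bin/sh\n",
                            "node2": b"#!/bin/bash\n"})
# conflicts == ["node2"]
```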
Regards,
Timothy
----- Original Message -----
> -------- Original Message --------
> Subject: Re: [ovirt-users] Strange error messages
> Date: Mon, 17 Nov 2014 18:21:20 +0530
> From: knarra <knarra(a)redhat.com>
> To: Demeter Tibor <tdemeter(a)itsmart.hu> , "users(a)ovirt.org List"
> <users(a)ovirt.org>
> On 11/17/2014 06:14 PM, Demeter Tibor wrote:
> > Hi,
> > Meanwhile this is happening every two hours.
> > For example 09:21, 11:21, 13:21
> > Can anybody help me?
> > Thanks,
> > Tibor
> This happens because the interval for syncing the hooks from the nodes to
> the engine has been configured as two hours.
> > ----- Original Message -----
> > > Hi,
> > > This morning I got a lot of similar messages on the console:
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook set-POST-30samba-set.sh of Cluster
> > > r710cluster1.
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook stop-PRE-29CTDB-teardown.sh of Cluster
> > > r710cluster1.
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook add-brick-PRE-28Quota-enable-root-xattr-heal.sh
> > > of Cluster r710cluster1.
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook set-POST-31ganesha-set.sh of Cluster
> > > r710cluster1.
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook start-POST-30samba-start.sh of Cluster
> > > r710cluster1.
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook reset-POST-31ganesha-reset.sh of Cluster
> > > r710cluster1.
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook
> > > gsync-create-POST-56glusterd-geo-rep-create-post.sh
> > > of Cluster r710cluster1.
> > > What does this mean?
> > > The system seems to be working.
> > > Thanks:
> > > Tibor
> > > _______________________________________________
> > > Users mailing list
> > > Users(a)ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > _______________________________________________
> > Users mailing list Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
10 years, 4 months
Re: [ovirt-users] Using 10gb vNIC/vbridge into VM is possible?
by Kalil de A. Carvalho
Dear Amador,
No, unfortunately not.
This was just a question from a friend, because he needs VMs with 10Gb NICs.
Today he is using XenServer; that solution does not meet his expectations,
and he is researching other solutions.
I told him about oVirt/KVM, but 10Gb NICs are a prerequisite for him.
I will plan a test project with him and collect the results.
Best regards.
On Fri, Dec 19, 2014 at 1:18 AM, Amador Segundo <asegundo(a)redhat.com> wrote:
> Virtio devices do not report a link speed, so we show a fake
> "1000mbps" in the Admin Portal. If your boxes have 10Gbps devices, then your
> VMs are already taking advantage of that. Did you test it? Could you share
> some results?
>
>
> -----Original Message-----
> From: Kalil de A. Carvalho [kalilac(a)gmail.com]
> Received: Thursday, 18 Dec 2014, 22:59
> To: users(a)ovirt.org
> Subject: [ovirt-users] Using 10gb vNIC/vbridge into VM is possible?
>
>
> Hello all.
>
> Today a colleague asked me whether it is possible to use a vNIC or vbridge
> in a VM managed by oVirt.
>
> What he wants is a virtual 10Gb network between some machines.
>
> All the hosts' NICs are 10Gb.
>
> Is this possible?
>
> If yes, how can I do it?
>
> Best regards.
>
--
Atenciosamente,
Kalil de A. Carvalho
10 years, 4 months
all hosts non-operational
by Brent Hartzell
Hello,
After testing the replacement of a failed Gluster brick (shared oVirt/Gluster
nodes), ALL hosts in the cluster go non-responsive, storage drops off, etc.
Now gluster peer status fails, I can't set any volume options, the volume
randomly drops out of oVirt (it was created from oVirt), and the log in the
oVirt dashboard shows an entry that the volume was deleted (but it is still
there). Any gluster commands just hang. The combination of oVirt & Gluster
seems stable until there's a problem; then literally everything grinds to a
halt. All VMs go down, the datacenter & hosts go non-responsive, and the
whole thing is broken. Any ideas on what we should be looking for?
10 years, 4 months
vm has paused due to unknown storage error
by Punit Dambiwal
Hi,
Suddenly all of the VMs on one host paused with the following error:
vm has paused due to unknown storage error
I am using GlusterFS storage, distributed-replicate with replica=2; my
storage and compute are both running on the same nodes.
Engine logs: http://ur1.ca/j31iu
Host logs: http://ur1.ca/j31kk (I grepped it for one failed VM)
Thanks,
Punit
10 years, 4 months
Setup, run and access a VNC session built from script
by Nicolas Ecarnot
Hi,
Some months ago, I think I read something like this here, but I cannot
find it...
We have several oVirt setups, and we cannot change that.
We have several VMs on each of them, and sometimes we have to access
their oVirt VNC console, but it is painful to know on which oVirt setup
they are running. At present, I either have to remember on which one a VM
lies, or web-connect to every oVirt web GUI and crawl through them.
I thought it would be possible to have a DNS CNAME like this:
c-myServerName, pointing to something leading to the oVirt VNC console
of the myServerName VM.
That would mean asking oVirt to create a VNC session for the
correct VM, create the password, then run a noVNC session with the correct
credentials.
I'm used to playing with simple oVirt shell commands, but I guess that many
things will happen in an HTML and WWW context, so I've started to play
with REST.
Does anybody know whether someone has taken the time to do similar things,
or whether someone remembers having seen such a workflow?
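For the REST part, a sketch of the two calls this would need on oVirt 3.x: a VM search, followed by the ticket action that returns a one-time console password. This only builds the requests; the engine URL and VM name are placeholders, and the exact API should be checked against your version's REST documentation:

```python
# Build the oVirt 3.x REST requests needed to locate a VM by name and
# then ask for a one-time console ticket. Only constructs URLs/bodies;
# actually sending them needs an engine, credentials, and TLS setup.
from urllib.parse import quote

def vm_search_url(engine_base: str, vm_name: str) -> str:
    # GET this against each engine until one returns the VM
    return "%s/api/vms?search=%s" % (engine_base, quote("name=" + vm_name))

def console_ticket_request(engine_base: str, vm_id: str):
    # POST this XML body to the URL; in oVirt 3.x the response carries
    # <ticket><value>...</value><expiry>...</expiry></ticket>
    url = "%s/api/vms/%s/ticket" % (engine_base, vm_id)
    return url, "<action/>"

print(vm_search_url("https://engine1.example.com", "myServerName"))
```

A wrapper script could loop over all engines with the search URL, then feed the returned ticket value to a noVNC session.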
Regards
--
Nicolas Ecarnot
10 years, 4 months
Re: [ovirt-users] Using 10gb vNIC/vbridge into VM is possible? (Darrell Budic)
by Nikolai Sednev
https://bugzilla.redhat.com/show_bug.cgi?id=1168478
SR-IOV is targeted for release 3.6, and you'll need an appropriate NIC in your host that supports this functionality: http://wiki.ovirt.org/Feature/SR-IOV
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Friday, December 19, 2014 8:22:17 PM
Subject: Users Digest, Vol 39, Issue 135
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: console viewer for ovrit engine (Yue, Cong)
2. Re: Using 10gb vNIC/vbridge into VM is possible? (Darrell Budic)
3. Re: VM failover with ovirt3.5 (Yue, Cong)
----------------------------------------------------------------------
Message: 1
Date: Fri, 19 Dec 2014 09:03:11 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Simone Tiraboschi <stirabos(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] console viewer for ovrit engine
Message-ID:
<ED08B56256B38842A463A2A0804C5AC0326AEEF291(a)svr-ca-exch1.atg.lc>
Content-Type: text/plain; charset="utf-8"
So it means the websocket proxy should be installed with the engine, on the host where the engine VM runs.
The oVirt engine is on CentOS 6.6.
I will try this later.
Thanks,
Cong
-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:31 AM
To: Yue, Cong
Cc: Gianluca Cecchi; users(a)ovirt.org
Subject: Re: [ovirt-users] console viewer for ovrit engine
----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Simone Tiraboschi" <stirabos(a)redhat.com>
> Cc: "Gianluca Cecchi" <gianluca.cecchi(a)gmail.com>, users(a)ovirt.org
> Sent: Thursday, December 18, 2014 5:39:39 PM
> Subject: RE: [ovirt-users] console viewer for ovrit engine
>
> Sorry. It is centos 7 for host.
>
>
> -----Original Message-----
> From: Yue, Cong
> Sent: Thursday, December 18, 2014 8:38 AM
> To: 'Simone Tiraboschi'
> Cc: Gianluca Cecchi; users(a)ovirt.org
> Subject: RE: [ovirt-users] console viewer for ovrit engine
>
> Thanks for the reply.
> Yes, I am using CentOS for the host. What is the engine host in your definition?
> Is it the physical host on which the engine VM is running?
The physical host or VM which runs the engine.
> I am doing the walkthrough as
> http://community.redhat.com/blog/2014/10/up-and-running-with-ovirt-3-5
> /
> http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5
> -part-two/
>
> I am glad to test your new 3.5.1 in my environment.
>
> Thanks,
> Cong
>
>
> -----Original Message-----
> From: Simone Tiraboschi [mailto:stirabos@redhat.com]
> Sent: Thursday, December 18, 2014 1:02 AM
> To: Yue, Cong
> Cc: Gianluca Cecchi; users(a)ovirt.org
> Subject: Re: [ovirt-users] console viewer for ovrit engine
>
>
>
> ----- Original Message -----
> > From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> > To: "Gianluca Cecchi" <gianluca.cecchi(a)gmail.com>
> > Cc: users(a)ovirt.org
> > Sent: Thursday, December 18, 2014 12:45:09 AM
> > Subject: Re: [ovirt-users] console viewer for ovrit engine
> >
> >
> >
> > I checked, but it also says there is no such package. It works now
> > with my native client.
> >
> > Somewhat strange...
>
> The oVirt websocket proxy is part of the engine and it's by default
> installed on the engine host when you install the engine; engine-setup
> asks if you want to configure it or not.
> Then, if your network design really requires it, you can also install
> and configure the websocket proxy on a different host.
> http://www.ovirt.org/Features/WebSocketProxy_on_a_separate_host
> but this is already a special case.
>
> I read in this thread that you are using centos7 (also for the engine host?).
> We didn't release oVirt 3.5.0 for el7 but we'll do since 3.5.1 which
> is targeted just after Christmas vacation (you are welcome to help us
> testing it on centos7!) so now you still cannot find
> ovirt-websocket-proxy rpm for
> el7 on the stable branch.
> If you really need to install it right now on el7 you can try from
> nightly snapshot.
>
>
>
>
> > Thanks,
> >
> > Cong
> >
> >
> >
> >
> >
> > From: Gianluca Cecchi [mailto:gianluca.cecchi@gmail.com]
> > Sent: Wednesday, December 17, 2014 2:07 PM
> > To: Yue, Cong
> > Cc: Donny Davis; awels(a)redhat.com; users(a)ovirt.org
> > Subject: Re: [ovirt-users] console viewer for ovrit engine
> >
> >
> >
> >
> > On Wed, Dec 17, 2014 at 10:58 PM, Yue, Cong <
> > Cong_Yue(a)alliedtelesis.com >
> > wrote:
> >
> > Thanks, but it saids there is no ovirt-websocket-proxy packges.
> > I am using the repository of
> > http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm
> >
> > Thanks,
> > Cong
> >
> >
> >
> >
> > The package, at least in 3.5 is
> > ovirt-engine-websocket-proxy
> >
> >
> > Gianluca
> >
> >
> >
> >
> >
> > This e-mail message is for the sole use of the intended recipient(s)
> > and may contain confidential and privileged information. Any
> > unauthorized review, use, disclosure or distribution is prohibited.
> > If you are not the intended recipient, please contact the sender by
> > reply e-mail and destroy all copies of the original message. If you
> > are the intended recipient, please be advised that the content of
> > this message is subject to access, review and disclosure by the
> > sender's e-mail System Administrator.
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
>
------------------------------
Message: 2
Date: Fri, 19 Dec 2014 11:08:03 -0600
From: Darrell Budic <budic(a)onholyground.com>
To: "Kalil de A. Carvalho" <kalilac(a)gmail.com>
Cc: Amador Segundo <asegundo(a)redhat.com>, users(a)ovirt.org
Subject: Re: [ovirt-users] Using 10gb vNIC/vbridge into VM is
possible?
Message-ID: <B6ECAED5-7CE3-44D2-A4E8-90EF46B3203D(a)onholyground.com>
Content-Type: text/plain; charset="utf-8"
I tried a quick iperf test a while back and got 3-4Gb/sec between a pair of VMs on separate hosts with a 10G infrastructure, no real tuning and no SRIOV. That met my needs so I didn't try anything further. If you were aiming for 10G for all, you'd want to work on SRIOV I imagine, but they get pretty good performance even without it.
> On Dec 19, 2014, at 4:49 AM, Kalil de A. Carvalho <kalilac(a)gmail.com> wrote:
>
> Dear Amador.
>
> No, unfortunately not.
>
> This was just a question from a friend, because he needs VMs with 10Gb NICs.
>
> Today he is using XenServer; that solution does not meet his expectations, and he is researching other solutions.
>
> I told him about oVirt/KVM, but 10Gb NICs are a prerequisite for him.
>
> I will plan a test project with him and collect the results.
>
> Best regards.
>
> On Fri, Dec 19, 2014 at 1:18 AM, Amador Segundo <asegundo(a)redhat.com <mailto:asegundo@redhat.com>> wrote:
> Virtio devices do not report a link speed, so we show a fake "1000mbps" in the Admin Portal. If your boxes have 10Gbps devices, then your VMs are already taking advantage of that. Did you test it? Could you share some results?
>
>
> -----Original Message-----
> From: Kalil de A. Carvalho [kalilac(a)gmail.com <mailto:kalilac@gmail.com>]
> Received: Thursday, 18 Dec 2014, 22:59
> To: users(a)ovirt.org <mailto:users@ovirt.org>
> Subject: [ovirt-users] Using 10gb vNIC/vbridge into VM is possible?
>
>
> Hello all.
>
> Today a colleague asked me whether it is possible to use a vNIC or vbridge in a VM managed by oVirt.
>
> What he wants is a virtual 10Gb network between some machines.
>
> All the hosts' NICs are 10Gb.
>
> Is this possible?
>
> If yes, how can I do it?
>
> Best regards.
>
>
>
>
> --
> Atenciosamente,
> Kalil de A. Carvalho
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org <mailto:Users@ovirt.org>
> http://lists.ovirt.org/mailman/listinfo/users <http://lists.ovirt.org/mailman/listinfo/users>
------------------------------
Message: 3
Date: Fri, 19 Dec 2014 10:22:10 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Simone Tiraboschi <stirabos(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Message-ID:
<ED08B56256B38842A463A2A0804C5AC0326AEEF2F1(a)svr-ca-exch1.atg.lc>
Content-Type: text/plain; charset="utf-8"
Thanks for the information. These are the logs for my three oVirt nodes.
From the output of hosted-engine --vm-status, it shows the engine state for my 2nd and 3rd oVirt nodes is DOWN.
Is this the reason why VM failover does not work in my environment? How can I make the engine also work on my 2nd and 3rd oVirt nodes?
--
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 150475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=150475 (Fri Dec 19 13:12:18 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 1572
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1572 (Fri Dec 19 10:12:18 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : False
Hostname : 10.0.0.92
Host ID : 3
Engine status : unknown stale-data
Score : 2400
Local maintenance : False
Host timestamp : 987
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=987 (Fri Dec 19 10:09:58 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown
--
And the /var/log/ovirt-hosted-engine-ha/agent.log for three ovirt nodes are as follows:
--
10.0.0.94(hosted-engine-1)
---
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.93 (id 2): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448
(Fri Dec 19 10:10:14
2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
'host-ts': 1448}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.92 (id 3): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987
(Fri Dec 19 10:09:58
2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
'host-ts': 987}
MainThread::INFO::2014-12-19
13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up',
'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance':
False, 'cpu-load': 0.0269, 'gateway': True}
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
----
10.0.0.93 (hosted-engine-2)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
10.0.0.92 (hosted-engine-3)
same as 10.0.0.93
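The repeating start_monitoring entries above can be condensed to just the state lines; a minimal sketch (two sample lines are inlined via a here-document so the pipeline runs stand-alone — point it at a saved copy of agent.log instead):

```shell
# Count how often each HA state appears in the log; replace the
# here-document with /var/log/ovirt-hosted-engine-ha/agent.log on a real host.
grep -o 'Current state [A-Za-z]*' <<'EOF' | sort | uniq -c
Current state EngineUp (score: 2400)
Current state EngineDown (score: 2400)
Current state EngineUp (score: 2400)
EOF
```

On a real host the same one-liner is: grep -o 'Current state [A-Za-z]*' /var/log/ovirt-hosted-engine-ha/agent.log | sort | uniq -c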
--
-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:28 AM
To: Yue, Cong
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5
----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: users(a)ovirt.org
> Sent: Friday, December 19, 2014 2:14:33 AM
> Subject: [ovirt-users] VM failover with ovirt3.5
>
>
>
> Hi
>
>
>
> In my environment, I have 3 oVirt nodes in one cluster, and on top of
> host-1 there is one VM hosting the oVirt engine.
>
> I also have one external storage array for the cluster to use as the
> data domains for the engine and for data.
>
> I confirmed live migration works well in my environment.
>
> But VM failover seems very buggy if I force one oVirt node to shut
> down. Sometimes the VM on the node that was shut down can migrate to
> another host, but it takes several minutes or more.
>
> Sometimes it cannot migrate at all. Sometimes the VM only begins to
> move once the host is back.
Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/ ?
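For completeness, a minimal sketch of collecting those logs (the paths are the defaults written by ovirt-hosted-engine-ha; the LOGDIR override is an assumption for customized installs):

```shell
# Dump the tail of each HA log on this host; agent.log holds the HA state
# machine, broker.log the storage/network monitors.
LOGDIR=${LOGDIR:-/var/log/ovirt-hosted-engine-ha}
for f in agent.log broker.log; do
    if [ -r "$LOGDIR/$f" ]; then
        echo "=== $LOGDIR/$f (last 20 lines) ==="
        tail -n 20 "$LOGDIR/$f"
    else
        echo "=== $LOGDIR/$f not readable on this host ==="
    fi
done
```

Running `hosted-engine --vm-status` on each host gives the same agent view in summary form.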
> Is there any documentation explaining how VM failover works? And are
> there any reported bugs related to this?
http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
> Thanks in advance,
>
> Cong
>
>
>
>
> This e-mail message is for the sole use of the intended recipient(s)
> and may contain confidential and privileged information. Any
> unauthorized review, use, disclosure or distribution is prohibited. If
> you are not the intended recipient, please contact the sender by reply
> e-mail and destroy all copies of the original message. If you are the
> intended recipient, please be advised that the content of this message
> is subject to access, review and disclosure by the sender's e-mail System Administrator.
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
------------------------------
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
End of Users Digest, Vol 39, Issue 135
**************************************
live migration works well in my environment.<br>><br>> But it seems =
very buggy for VM failover if I try to force to shut down<br>> one ovirt=
node. Sometimes the VM in the node which is shutdown can<br>> migrate t=
o other host, but it take more than several minutes.<br>><br>> Someti=
mes, it can not migrate at all. Sometimes, only when the host is<br>> ba=
ck, the VM is beginning to move.<br><div><br></div>Can you please check or =
share the logs under /var/log/ovirt-hosted-engine-ha/ ?<br><div><br></div>&=
gt; Is there some documentation to explain how VM failover is working? And<=
br>> is there some bugs reported related with this?<br><div><br></div>ht=
tp://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram<br><div>=
<br></div>> Thanks in advance,<br>><br>> Cong<br>><br>><br>&=
gt;<br>><br>> This e-mail message is for the sole use of the intended=
recipient(s)<br>> and may contain confidential and privileged informati=
on. Any<br>> unauthorized review, use, disclosure or distribution is pro=
hibited. If<br>> you are not the intended recipient, please contact the =
sender by reply<br>> e-mail and destroy all copies of the original messa=
ge. If you are the<br>> intended recipient, please be advised that the c=
ontent of this message<br>> is subject to access, review and disclosure =
by the sender's e-mail System Administrator.<br>><br>> ______________=
_________________________________<br>> Users mailing list<br>> Users@=
ovirt.org<br>> http://lists.ovirt.org/mailman/listinfo/users<br>><br>=
<div><br></div>This e-mail message is for the sole use of the intended reci=
pient(s) and may contain confidential and privileged information. Any unaut=
horized review, use, disclosure or distribution is prohibited. If you are n=
ot the intended recipient, please contact the sender by reply e-mail and de=
stroy all copies of the original message. If you are the intended recipient=
, please be advised that the content of this message is subject to access, =
review and disclosure by the sender's e-mail System Administrator.<br><div>=
<br></div>------------------------------<br><div><br></div>________________=
_______________________________<br>Users mailing list<br>Users(a)ovirt.org<br=
>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div><br>End of=
Users Digest, Vol 39, Issue 135<br>**************************************<=
br></div><div><br></div></div></body></html>
------=_Part_235182_797090940.1419163706720--
10 years, 4 months
EXTNET Hook and Libvirtd "Default" Network Setup
by Andrew Wagner
All,
I'm testing out oVirt for one of our projects that wants to try an
all-in-one setup before going to a larger deployment. For their testing,
they want to use the default NAT'd network from libvirtd on the host.
I've installed oVirt, installed the extnet hook, enabled IP forwarding in
sysctl.conf and loaded the setting, and created a vm that attaches to
the libvirtd "default" network and gets an IP. The VM can ssh to the
virbr0 IP address, in this case 192.168.122.1, to access the host.
However, the VM cannot reach any IP address off of the NAT'd subnet. I
haven't changed any of the default iptables rules that oVirt and
libvirtd create. Looking at ip route and the iptables rules, I feel that
traffic should be getting directed appropriately.
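A couple of quick host-side checks can narrow this down; a minimal sketch, assuming the stock virbr0/ovirtmgmt device names:

```shell
# Confirm IPv4 forwarding is actually live at runtime (a sysctl.conf entry
# alone does nothing until it is loaded); this prints 1 when enabled.
cat /proc/sys/net/ipv4/ip_forward

# Then, as root, check whether FORWARD-chain rules are rejecting traffic
# leaving virbr0 (shown as a comment because it needs privileges):
#   iptables -L FORWARD -n -v | grep -E 'virbr0|REJECT'
```

If the packet counters on a REJECT rule climb while the VM pings an outside address, the NAT path is being blocked before it ever reaches ovirtmgmt.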
Does anyone have any thoughts as to what the issue may be? For some
reason, the ovirtmgmt bridge doesn't seem to be receiving or allowing
traffic from virbr0 to pass across it. I can provide more information if
that would be helpful!
Andrew Wagner
Can and How to move a VM from VMware to oVirt
by zhangjian2011
Hi,
I want to move a Windows VM (managed by VMware Player on a Windows 7 host)
to oVirt. Can this be done, and if so, how?
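It can be done; the usual route is virt-v2v from the libguestfs project. A sketch (the .vmx path and output directory are made-up placeholders, and older virt-v2v releases used `-i libvirtxml` instead of `-i vmx`):

```shell
# Convert a VMware Player guest (its .vmx plus .vmdk files) into a local
# image that can then be imported into oVirt. Guarded so the sketch
# degrades gracefully when virt-v2v is not installed.
if command -v virt-v2v >/dev/null 2>&1; then
    virt-v2v -i vmx /path/to/WindowsVM.vmx -o local -os /var/tmp/converted
    v2v="ran"
else
    echo "virt-v2v not installed (try the virt-v2v / libguestfs packages)"
    v2v="skipped"
fi
```

The resulting image can then be imported through an oVirt export/storage domain; install the virtio drivers in the Windows guest before switching its disk to virtio.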
Thanks.
--
--------------------------------------------------
Zhang Jian
Development Dept.I
Nanjing Fujitsu Nanda Software Tech. Co., Ltd.(FNST)
No.6 Wenzhu Road, Nanjing, 210012, China
TEL: +86+25-86630566-8526
FUJITSU INTERNAL: 7998-8526
FAX: +86+25-83317685
MAIL: zhangjian2011(a)cn.fujitsu.com
--------------------------------------------------
Server 2012 R2 + Intel Conroe Cluster
by Nathan Llaneza
Hey All,
I think I have found a bug in oVirt 3.4.4. We just bought a new server that
supports the Conroe CPU model, and I am trying to install Server 2012 R2
without luck. I keep getting error code 0x000000C4. The problem is while
Windows is still to load into its pre-installation environment it cashes
and then immediately resets. This is a continuous loop. I have found a way
to install Server 2012 R2. Move the cluster away from the Conroe Family (in
my case Penryn). Thanks for all you do.
CPU Type
by Brent Hartzell
Hello,
Is there a way to add "Xeon" or another class of CPU Type to oVirt? We have
some test hosts, which use a combo of the following CPU types:
Xeon L5420
Xeon E5430
Xeon E5420
The only two CPU Types that will work in oVirt are Conroe & Penryn. Inside
of a VM, it reports "Core 2 Duo".
Host reports:
model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
VM reports:
model name : Intel Core 2 Duo P9xxx (Penryn Class Core 2)
//
Is there a way to have the VM report the correct CPU? It doesn't appear to
cause any performance or other issues, but seems to be just a display issue.
My concern though, is that we may not be able to add other servers with
different Intels to the same cluster, for example, new hosts with E5-XXXX or
E3-XXXX processors. Can someone confirm this wouldn't be an issue?
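As far as the mixed-cluster concern goes, the cluster CPU type is a lowest-common-denominator baseline, so newer Xeons can generally join a Penryn-level cluster; they are simply masked down to that model. The display difference is easy to confirm from both sides (run on the host and inside a guest):

```shell
# Print the CPU model string the kernel sees: on the host this is the real
# Xeon E5430, inside the VM it is the Conroe/Penryn model QEMU presents.
grep -m1 'model name' /proc/cpuinfo
```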
not receiving email
by lucas castro
Hey folks,
Is there any problem with the mailing list?
Since Dec 15 I haven't received any mail.
--
contatos:
Celular: ( 99 ) 9143-5954 - Vivo
skype: lucasd3castro
msn: lucascastroborges(a)hotmail.com
console viewer for ovrit engine
by Yue, Cong
Hi
I finally created a VM and it seems to work well. But from the administrator
menu I cannot show the browser-embedded console, noVNC, or SPICE HTML5
browser client.
I am using Firefox and Chrome on the Ubuntu 14.04 desktop OS.
Are there any settings or requirements for the browser-embedded console viewer?
Also, if I select the other console option, native client, one .w file is
downloaded; what is the recommended native viewer to open these .w files with
on the Ubuntu 14.04 desktop?
Thanks,
Cong
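For the native-client option, the downloaded file (named console.vv in current oVirt releases; the path below is a guess at where the browser saved it) is meant for remote-viewer. A sketch for Ubuntu 14.04:

```shell
# The downloaded file is a small text descriptor that remote-viewer (from
# the virt-viewer package) knows how to open:
#   sudo apt-get install virt-viewer
#   remote-viewer ~/Downloads/console.vv
# Peeking inside the descriptor shows the host, port and ticket it points at:
head -n 5 ~/Downloads/console.vv 2>/dev/null || echo "no console file downloaded yet"
```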
Re: [ovirt-users] Ovirt Engine WAN security
by Martijn Grendelman
Donny Davis wrote on 18-12-2014 at 23:25:
> I would like to inquire if anyone is using the ovirt engine to control
> remote datacenters, and if so.. How are you securing it. I realize you
> cannot devulge trade secrets or your actual setup.. Just general info,
> like we are using vpn, or SSH..
We use a 'management VLAN', only reachable through VPN.
Best regards,
Martijn.
Re: [ovirt-users] Cannot activate storage domain
by Sahina Bose
[+Sas - thanks for the link to virt-store usecase article inline]
On 12/18/2014 06:56 PM, Brent Hartzell wrote:
> Hello,
>
> I had actually gotten this sorted out, somewhat. If I disable server quorum
> on the volume, the storage domain will activate. The volume is/was optimized
> for virt store via oVirt. The brick in question was not the first brick
> added to the volume through oVirt however, it appears that it may have been
> the first brick in the replica being used, but I'm not certain how to find
> this out.
The recommended setting is to have both client and server side quorum
turned on. But turning on server-side quorum with a 2-way replica volume
would mean that your volume goes offline when one of the bricks goes down.
"gluster volume info" command will give you information about the volume
topology. So will the bricks sub-tab for Volume in oVirt. The order in
which the bricks are listed, is the order of the replica sets.
> Disabling quorum allowed me to get the VM's affected back online however, is
> this the recommended procedure? I tried to use replace-brick with another
> node but it failed because the failed brick was not available. Would we
> leave quorum disabled until that brick gets replaced? IE - rebuild the
> server with the same hostname/IP file structure and rebalance the cluster?
http://www.gluster.org/community/documentation/index.php/Virt-store-usecase
- for recommendations on volume tunables.
You could add another brick to your volume to make it a replica 3 and
then turn on quorum?
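For reference, that replica-3 change maps to CLI commands roughly like these (a sketch: the volume name "data" and the brick path are made-up, and exact syntax can vary across GlusterFS releases):

```shell
# Guarded sketch: inspect the volume topology (brick order = replica sets),
# grow replica 2 -> replica 3 with a brick on a third host, then enable
# server-side quorum on the volume.
if command -v gluster >/dev/null 2>&1; then
    gluster volume info data
    gluster volume add-brick data replica 3 host3:/bricks/data
    gluster volume set data cluster.server-quorum-type server
    gdemo="ran"
else
    echo "gluster CLI not available; commands shown for illustration"
    gdemo="skipped"
fi
```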
For help on recovering your volume, I suggest you write to
gluster-users(a)gluster.org
>
> ////
>
> While that happened, I read somewhere about this happening with a replica 2
> - I've created a new volume with replica 3 and plan to test this again. Is
> there any info you can point me to for how to handle this when it happens or
> what the correct procedure is when a "first" brick fails?
>
>
> -----Original Message-----
> From: Sahina Bose [mailto:sabose@redhat.com]
> Sent: Thursday, December 18, 2014 3:51 AM
> To: Vered Volansky; Brent Hartzell
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] Cannot activate storage domain
>
>
> On 12/18/2014 01:35 PM, Vered Volansky wrote:
>> Adding Sahina.
>>
>> ----- Original Message -----
>>> From: "Brent Hartzell" <brent.hartzell(a)outlook.com>
>>> To: users(a)ovirt.org
>>> Sent: Thursday, December 18, 2014 3:38:11 AM
>>> Subject: [ovirt-users] Cannot activate storage domain
>>>
>>>
>>>
>>> Have the following:
>>>
>>>
>>>
>>> 6 hosts - virt + Gluster shared
>>>
>>>
>>>
>>> Gluster volume is distributed-replicate - replica 2
>>>
>>>
>>>
>>> Shutting down servers one at a time all work except for 1 brick. If
>>> we shut down one specific brick (1 brick per host) - we're unable to
>>> activate the storage domain. VM's that were actively running from
>>> other bricks continue to run. Whatever was running form that specific
>>> brick fails to run, gets paused etc.
>>>
>>>
>>>
>>> Error log shows the entry below. I'm not certain what it's saying is
>>> read only.nothing is read only that I can find.
>>>
>>>
>>>
>>>
>>>
>>> 2014-12-17 19:57:13,362 ERROR
>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStatusVDSCommand]
>>> (DefaultQuartzScheduler_Worker-47) [4e9290a2] Command
>>> SpmStatusVDSCommand(HostName = U23.domainame.net, HostId =
>>> 0db58e46-68a3-4ba0-a8aa-094893c045a1, storagePoolId =
>>> 7ccd6ea9-7d80-4170-afa1-64c10c185aa6) execution failed. Exception:
>>> VDSErrorException: VDSGenericException: VDSErrorException: Failed to
>>> SpmStatusVDS, error = [Errno 30] Read-only file system, code = 100
>>>
>>> 2014-12-17 19:57:13,363 INFO
>>> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyData]
>>> (DefaultQuartzScheduler_Worker-47) [4e9290a2]
>>> hostFromVds::selectedVds - U23.domainname.net, spmStatus returned null!
>>>
>>>
>>>
>>>
>>>
>>> According to Ovirt/Gluster, if a brick goes down, the VM should be
>>> able to be restarted from another brick without issue. This does not
>>> appear to be the case. If we take other bricks offline, it appears to
> work as expected.
>>> Something with this specific brick cases everything to break which
>>> then makes any VM's that were running from the brick unable to start.
> Do you have the recommended options for using volume as virt store turned
> on? Is client-side quorum turned on for the volume? Is the brick that causes
> the issue, the first brick in the replica set?
>
>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
Ovirt Engine WAN security
by Donny Davis
I would like to ask if anyone is using the oVirt engine to control
remote datacenters, and if so, how are you securing it? I realize you
cannot divulge trade secrets or your actual setup, just general info, like
"we are using VPN" or "SSH".
Thanks for any info anybody can provide.
Donny D
qemu update & Windows activation
by Markus Stockhausen
Hello,
we just built a new cluster with FC20 + the virt-preview repos enabled.
The idea behind that is to enable the snapshot live-merge feature. This
seems to work quite well.
The only culprit is the Windows activation. For some reason the
VM hardware of the old qemu 1.6/SeaBIOS 1.7.3 hypervisors is
different from the new qemu 2.1/SeaBIOS 1.7.5, so the OS is no longer
activated.
We already activated the VMs twice: during the first install on VMware,
and then again after the migration to oVirt. I have no problem
reactivating them a third time. My bigger fear is that I must
reactivate them with each new hypervisor generation. That could
be quite a lot of phone calls.
An interesting article from the Proxmox guys can be read here:
http://forum.proxmox.com/archive/index.php/t-19743.html
Conclusion of the discussion: after you go from qemu 1.7 to 2.1,
you can force the old hardware layout using -M pc-i440fx-1.7
Looking at the qemu command line in oVirt I can see our VMs
are fired up with pc-1.0,accel=kvm,usb=off - regardless of an
old (FC20) or a new hypervisor (FC20+virt-preview). So I would
guess that everything should work without a new reactivation.
Any ideas?
Markus
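Markus's observation about the machine type is the key point: libvirt pins the machine type in the domain XML, so a guest keeps its virtual hardware layout across qemu upgrades as long as that type stays available. An illustrative fragment (oVirt/VDSM generates this section itself; do not hand-edit managed guests):

```xml
<os>
  <!-- The machine type freezes the virtual hardware layout; "pc-1.0"
       matches the qemu command line quoted above. -->
  <type arch='x86_64' machine='pc-1.0'>hvm</type>
</os>
```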
****************************************************************************
This e-mail may contain confidential and/or privileged information. If you
are not the intended recipient (or have received this e-mail in error)
please notify the sender immediately and destroy this e-mail. Any
unauthorized copying, disclosure or distribution of the material in this
e-mail is strictly forbidden.
e-mails sent over the internet may have been written under a wrong name or
been manipulated. That is why this message sent as an e-mail is not a
legally binding declaration of intention.
Collogia
Unternehmensberatung AG
Ubierring 11
D-50678 Köln
executive board:
Kadir Akin
Dr. Michael Höhnerbach
President of the supervisory board:
Hans Kristian Langva
Registry office: district court Cologne
Register number: HRB 52 497
****************************************************************************
Using 10gb vNIC/vbridge into VM is possible?
by Kalil de A. Carvalho
Hello all.
Today a colleague at work asked me whether it is possible to use a vNIC or
vbridge in a VM managed by oVirt.
What he wants is a virtual 10Gb network between some machines.
All of the hosts' NICs are 10Gb.
Is this possible?
If yes, how can I do it?
Best regards.
Add new IP in ubuntu VM will clear the existing configuration | cloud-init
by Punit Dambiwal
Hi,
I tried Ubuntu 14.04 and Debian 7.6; both have the same issue:
1. Create a VM with one NIC, "eth0", then use Run Once to insert the
cloud-init data and bring it up.
2. Power down the VM.
3. Add an extra NIC, "eth1".
4. Use Run Once to insert the cloud-init data and bring it up.
5. The VM comes up successfully, but the eth0 and loopback address config
has been removed from the /etc/network/interfaces file.
6. Now ifconfig displays only eth1, because the eth0 and loopback
config was removed when eth1 was added.
Thanks,
Punit
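For comparison, a Debian/Ubuntu /etc/network/interfaces that keeps both NICs would look roughly like this (addresses are made-up; the point is that the loopback and eth0 stanzas must survive the eth1 addition, so a tool that rewrites the whole file loses them):

```text
# /etc/network/interfaces -- illustrative, static addressing assumed
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1

auto eth1
iface eth1 inet static
    address 10.0.0.10
    netmask 255.255.255.0
```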
Free Ovirt Powered Cloud
by Donny Davis
Hi guys, I'm providing a free public cloud solution, based entirely on
vanilla oVirt, called cloudspin.me.
It runs on IPv6, and I am looking for people to use the system, host
services, and report back to me with their results.
Data I am looking for:
Connection speed - is it comparable to other services?
User experience - are any changes recommended?
Does it work for you - what does, and does not, work for you?
I am trying to get funding to keep this a free resource for everyone to use.
(not from here :)
I am completely open to any and all suggestions and/or help with things. I
am a one-man show at the moment.
If anyone has any questions, please email me back.
Donny D
Re: [ovirt-users] Can not connect to Storage domain data
by Yue, Cong
I think my issue is related to the NFS version.
On the second host, if I change the value of Defaultvers in
/etc/nfsmount.conf from "Defaultvers=4" to "Defaultvers=3", the mount cannot
be done. When I change it back to "Defaultvers=4", it works.
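For reference, the relevant stanza in /etc/nfsmount.conf looks roughly like
this (a sketch; the section name is the standard one from nfsmount.conf(5),
and the value shown is the one that works in the setup described above):

```text
# /etc/nfsmount.conf -- default NFS protocol version for mount requests
[ NFSMount_Global_Options ]
Defaultvers=4
```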
Also, /proc/mounts shows that the NFS version is nfs4. But on my first
host, it is nfs3.
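To compare the negotiated NFS version on both hosts, the mount table can be
filtered like this (a small sketch; field 3 of /proc/mounts is the
filesystem type, which distinguishes nfs from nfs4):

```shell
# Print mountpoint and fstype for every NFS mount;
# the fstype column shows the negotiated version (nfs vs nfs4).
awk '$3 ~ /^nfs/ {print $2, $3}' /proc/mounts
```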
Does anybody have a similar issue?
Thanks in advance,
Cong
From: Yue, Cong
Sent: Thursday, December 18, 2014 9:52 AM
To: users(a)ovirt.org
Subject: Can not connect to Storage domain data
Hi
I successfully deployed the first oVirt host with hosted-engine --deploy.
The Engine VM works well.
However, when I try to create the second host in the same way as the guide
at
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/
I am not using GlusterFS; I just use one external NFS storage in my
environment.
The issue I have: in the engine administration menu, it says "cannot
connect to storage domain data".
On the second host, I checked both the storage and data domains with
nfs-check.py. It shows the status is OK.
http://www.ovirt.org/Troubleshooting_NFS_Storage_Issues
During deployment of the second host, how is the data domain supposed to be
mounted?
Thanks,