oVirt 3.6 Feature: Cumulative Network Usage Statistics
by Lior Vernia
Hello users and developers,
I've just put up a feature page for the aforementioned feature; in summary,
it is to report total RX/TX statistics for hosts and VMs in oVirt. This has
been requested several times on the users mailing list, and is
especially useful for accounting in VDI deployments.
You're more than welcome to review the feature page:
http://www.ovirt.org/Features/Cumulative_RX_TX_Statistics
Note that this only deals with network usage - it would be great to have
similar features for CPU and disk usage!
Yours, Lior.
name of virtual machine and hostname
by nicola gentile
Good morning,
I would like to ask for some information.
After installing ovirt, I created a pool of VMs with names
like centos-?? (from 1 to 20),
and ovirt then generated 20 VMs named centos-1, centos-2, centos-3, etc.
The problem is that when a VM starts, its hostname is not the same as the
VM name in ovirt, but is instead the name of the template.
Is it possible to make sure that the VM name and the hostname are identical?
Best regards
Nicola Gentile
Re: [ovirt-users] oVirt 3.5 and FreeIpa
by Marcelo Donato
Below is the solution, resolved by "Alon Bar-Lev" <alonbl(a)redhat.com>:
1. install ovirt-engine-extension-aaa-ldap; it is available in the
ovirt-3.5-snapshots repository.
2. create /etc/ovirt-engine/extensions.d/din.intranet-authz.properties
ovirt.engine.extension.name = din-intranet-authz
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.ldap.AuthzExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authz
config.profile.file.1 = /etc/ovirt-engine/aaa/din.intranet.properties
3. create /etc/ovirt-engine/extensions.d/din.intranet-authn.properties
ovirt.engine.extension.name = din-intranet-authn
ovirt.engine.extension.bindings.method = jbossmodule
ovirt.engine.extension.binding.jbossmodule.module = org.ovirt.engine-extensions.aaa.ldap
ovirt.engine.extension.binding.jbossmodule.class = org.ovirt.engineextensions.aaa.ldap.AuthnExtension
ovirt.engine.extension.provides = org.ovirt.engine.api.extensions.aaa.Authn
ovirt.engine.aaa.authn.profile.name = din.intranet
ovirt.engine.aaa.authn.authz.plugin = din-intranet-authz
config.profile.file.1 = /etc/ovirt-engine/aaa/din.intranet.properties
4. create /etc/ovirt-engine/aaa/din.intranet.properties
include = <ipa.properties>
vars.user = uid=admin,cn=users,cn=accounts,dc=din,dc=intranet
vars.password = 123456
vars.server = ipa1.din.intranet
pool.default.serverset.single.server = ${global:vars.server}
pool.default.auth.simple.bindDN = ${global:vars.user}
pool.default.auth.simple.password = ${global:vars.password}
5. restart the engine.
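For example (depending on the init system of the engine machine):
service ovirt-engine restart      # EL6 / SysV init
systemctl restart ovirt-engine    # EL7 / systemd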
Thanks a lot Alon.
--
When forwarding this message, please:
1. Delete my e-mail address and my name.
2. Also delete your friends' addresses before resending.
3. Use Bcc (blind carbon copy) to send messages!
Make it harder for viruses and spam to spread.
Problem Upgrading 3.4.4 -> 3.5
by InterNetX - Juergen Gotteswinter
Hi,
I am currently trying to upgrade an existing 3.4.4 setup (which has been
upgraded several times before, starting at 3.3), but this time I ran
into an error while upgrading the DB
-- snip --
********* QUERY **********
ALTER TABLE event_subscriber DROP CONSTRAINT
fk_event_subscriber_event_notification_methods;
**************************
2014-12-20 00:16:27 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
plugin.executeRaw:803 execute-result:
['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p',
'5432', '-u', 'engine', '-d', 'engine', '-l',
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20141220001232-3xjymi.log',
'-c', 'apply'], rc=1
2014-12-20 00:16:27 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
plugin.execute:861 execute-output:
['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p',
'5432', '-u', 'engine', '-d', 'engine', '-l',
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20141220001232-3xjymi.log',
'-c', 'apply'] stdout:
Creating schema engine@localhost:5432/engine
Saving custom users permissions on database objects...
upgrade script detected a change in Config, View or Stored Procedure...
Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0000_config.sql'...
Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0010_custom.sql'...
Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0020_add_materialized_views_table.sql'...
Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0030_materialized_views_extensions.sql'...
Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/pre_upgrade/0040_extend_installed_by_column.sql'...
Dropping materialized views...
Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0010_add_tables_for_gluster_volume_and_brick_details.sql'...
Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0020_gluster_refresh_gluster_volume_details-event_map.sql'...
Skipping upgrade script
/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0030_add_ha_columns_to_vds_statistics.sql,
already installed by 03040610
Skipping upgrade script
/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0040_add_ha_maintenance_events.sql,
already installed by 03040620
Running upgrade sql script
'/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0050_event_notification_methods.sql'...
2014-12-20 00:16:27 DEBUG
otopi.plugins.ovirt_engine_setup.ovirt_engine.db.schema
plugin.execute:866 execute-output:
['/usr/share/ovirt-engine/dbscripts/schema.sh', '-s', 'localhost', '-p',
'5432', '-u', 'engine', '-d', 'engine', '-l',
'/var/log/ovirt-engine/setup/ovirt-engine-setup-20141220001232-3xjymi.log',
'-c', 'apply'] stderr:
psql:/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0050_event_notification_methods.sql:2:
ERROR: constraint "fk_event_subscriber_event_notification_methods" of
relation "event_subscriber" does not exist
FATAL: Cannot execute sql command:
--file=/usr/share/ovirt-engine/dbscripts/upgrade/03_05_0050_event_notification_methods.sql
2014-12-20 00:16:27 DEBUG otopi.context context._executeMethod:152
method exception
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/otopi/context.py", line 142, in
_executeMethod
method['method']()
File
"/usr/share/ovirt-engine/setup/bin/../plugins/ovirt-engine-setup/ovirt-engine/db/schema.py",
line 291, in _misc
oenginecons.EngineDBEnv.PGPASS_FILE
File "/usr/lib/python2.6/site-packages/otopi/plugin.py", line 871, in
execute
command=args[0],
RuntimeError: Command '/usr/share/ovirt-engine/dbscripts/schema.sh'
failed to execute
2014-12-20 00:16:27 ERROR otopi.context context._executeMethod:161
Failed to execute stage 'Misc configuration': Command
'/usr/share/ovirt-engine/dbscripts/schema.sh' failed to execute
2014-12-20 00:16:27 DEBUG otopi.transaction transaction.abort:131
aborting 'Yum Transaction'
-- snip --
After that, engine-setup starts a rollback to 3.4.4, which works
flawlessly.
Anyone got an idea what is causing this?
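For reference, the constraints that actually exist on that table can be listed
with a catalog query (a diagnostic sketch, using the same connection parameters
as the schema.sh invocation in the log above):
psql -h localhost -p 5432 -U engine -d engine \
  -c "SELECT conname FROM pg_constraint WHERE conrelid = 'event_subscriber'::regclass;"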
Thanks,
Juergen
[RFC] oVirt mobile client
by Greg Sheremeta
Hi,
The focus of our OPW internship program starting in December will be
mobile and/or lightweight engine clients -- hopefully integrating the
new ovirt.js project.
I see that there are some already existing mobile clients for oVirt.
I'm trying to grasp what we have and what the needs are.
moVirt: https://github.com/matobet/moVirt (mbetak)
This appears to be more of a lightweight webadmin. No console access,
but I believe it's planned as part of OPW. (?)
nomad: http://www.ovirt.org/Project_Proposal_-_Nomad and
https://github.com/Vizuri/ovirt-nomad
Looks dead -- last commit 3 years ago.
Anyone know more about this one?
That's all I see on the first few pages of Google.
When I think of a mobile client for oVirt, I think the most useful
part would be the user portal -- simple operations for start and stop,
and the ability to view the console of VMs. moVirt mentions it wants
to support some basic management operations, though. I think it would
be difficult to do complex management in a mobile client. (I'm biased
towards huge screens, though.)
I'd like to see an official subproject started that coordinates our
mobile efforts.
Is this possible? What would it take to start it?
What would people like to see in such an app?
Greg Sheremeta
Red Hat, Inc.
Sr. Software Engineer, RHEV
Cell: 919-807-1086
gshereme(a)redhat.com
Issues with vm start up
by Shanil S
Hi All,
I am using ovirt version 3.5 and having some issues with VM startup
with cloud-init using the API in run-once mode.
Below are the steps I follow:
1. Create the VM via the API from a precreated template.
2. Start the VM in run-once mode and push the cloud-init data via the API.
3. The VM gets stuck, and the console displays the following:
Booting from DVD/CD.. ...
Boot failed : could not read from CDROM (code 004)
I am using the following XML for this operation:
<action>
<vm>
<os>
<boot dev='cdrom'/>
</os>
<initialization>
<cloud_init>
<host>
<address>test</address>
</host>
<network_configuration>
<nics>
<nic>
<interface>virtIO</interface>
<name>eth0</name>
<boot_protocol>static</boot_protocol>
<mac address=''/>
<network>
<ip address='' netmask='' gateway=''/>
</network>
<on_boot>true</on_boot><vnic_profile id='' />
</nic>
<nic>
<interface>virtIO</interface>
<name>eth1</name>
<boot_protocol>static</boot_protocol>
<mac address=''/>
<network>
<ip address='' netmask='255.255.255.0' gateway=''/>
</network>
<on_boot>true</on_boot><vnic_profile id='' />
</nic>
</nics>
</network_configuration>
<files>
<file>
<name>/ignored</name><content><![CDATA[#cloud-config
disable-ec2-metadata: true
disable_root: false
ssh_pwauth: true
ssh_deletekeys: true
chpasswd: { expire: False }
users:
- name: root
primary-group: root
passwd: 8W7RQ5Bh
lock-passwd: false
runcmd:
- sed -i '/nameserver/d' /etc/resolv.conf
- echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
- echo 'nameserver 8.8.4.4' >> /etc/resolv.conf
- echo 'root:8W7RQ5Bh' | chpasswd
- yum -y update
- yum -y install rdate
- rdate -s stdtime.gov.hk]]></content>
<type>plaintext</type>
</file>
</files>
</cloud_init><custom_script><![CDATA[#cloud-config
disable-ec2-metadata: true
disable_root: false
ssh_pwauth: true
ssh_deletekeys: true
chpasswd: { expire: False }
users:
- name: root
primary-group: root
passwd: 8W7RQ5Bh
lock-passwd: false
runcmd:
- sed -i '/nameserver/d' /etc/resolv.conf
- echo 'nameserver 8.8.8.8' >> /etc/resolv.conf
- echo 'nameserver 8.8.4.4' >> /etc/resolv.conf
- echo 'root:8W7RQ5Bh' | chpasswd
- yum -y update
- yum -y install rdate
- rdate -s stdtime.gov.hk]]></custom_script>
</initialization>
</vm>
</action>
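For reference, such a run-once action is typically triggered by POSTing the XML
above to the VM's start action in the REST API; a sketch in which the engine URL,
credentials, VM ID and file name are placeholders:
curl -k -u admin@internal:password -H "Content-Type: application/xml" \
  -d @runonce.xml https://engine.example.com/api/vms/<vm-id>/start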
I am also attaching a screenshot.
--
Regards
Shanil
Backup and Restore of VMs
by Soeren Malchow
Dear all,
ovirt: 3.5
gluster: 3.6.1
OS: CentOS 7 (except ovirt hosted engine = centos 6.6)
I spent quite a while researching backup and restore for VMs, and so far I
have come up with this as a start for us:
- API calls to create scheduled snapshots of virtual machines
This is for short-term storage and to guard against accidental deletion
within the VM, but not for storage corruption
- Since we are using a gluster backend, gluster snapshots
I wasn't able to really test this so far, since the LV needs to be thin
provisioned and we did not do that in the setup
For the API calls we have the problem that we cannot find any existing
scripts or anything like that to do those snapshots (and I/we are not
developers enough to write them).
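A scheduled snapshot itself boils down to a single REST call; a minimal
sketch, in which the engine URL, credentials and VM ID are placeholders
(-k skips certificate verification):
curl -k -u admin@internal:password -H "Content-Type: application/xml" \
  -d '<snapshot><description>nightly backup</description></snapshot>' \
  https://engine.example.com/api/vms/<vm-id>/snapshots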
As additional information, we have a ZFS-based storage with deduplication
that we use for other backup purposes, which does a great job especially
because of the deduplication (we can store generations of backups without
problems); this storage can be NFS exported and used as a backup repository.
Are there any backup and restore procedures you guys are using that work
for you, and can you point me in the right direction?
I am a little bit lost right now and would appreciate any help.
Regards
Soeren
Re: [ovirt-users] VM failover with ovirt3.5
by cong yue
This is the vdsm.log from just after I set the host where the HE VM runs
to local maintenance. In the log, there are parts like
---
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,988::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,989::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
GuestMonitor-HostedEngine::DEBUG::2014-12-30
13:01:03,990::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
JsonRpc (StompReactor)::DEBUG::2014-12-30
13:01:04,675::stompReactor::98::Broker.StompAdapter::(handle_frame)
Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-12-30
13:01:04,676::__init__::504::jsonrpc.JsonRpcServer::(serve_requests)
Waiting for request
Thread-1806995::DEBUG::2014-12-30
13:01:04,677::stompReactor::163::yajsonrpc.StompServer::(send) Sending
response
JsonRpc (StompReactor)::DEBUG::2014-12-30
13:01:04,678::stompReactor::98::Broker.StompAdapter::(handle_frame)
Handling message <StompFrame command='SEND'>
JsonRpcServer::DEBUG::2014-12-30
13:01:04,679::__init__::504::jsonrpc.JsonRpcServer::(serve_requests)
Waiting for request
Thread-1806996::DEBUG::2014-12-30
13:01:04,681::vm::486::vm.Vm::(_getUserCpuTuneInfo)
vmId=`0d3adb5c-0960-483c-9d73-5e256a519f2f`::Domain Metadata is not
set
---
Is something wrong with this?
Thanks,
Cong
> From: Artyom Lukianov <alukiano(a)redhat.com>
> Date: December 29, 2014, 23:13:45 GMT-8
> To: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
> Cc: Simone Tiraboschi <stirabos(a)redhat.com>, "users(a)ovirt.org"
> <users(a)ovirt.org>
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> The HE VM is migrated only by ovirt-ha-agent and not by the engine, but the FatalError is
> more interesting; can you provide the vdsm.log for this one, please?
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Artyom Lukianov" <alukiano(a)redhat.com>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, December 29, 2014 8:29:04 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> I disabled local maintenance mode on all hosts, and then set only the host
> where the HE VM runs to local maintenance mode. The logs are as follows.
> During the migration of the HE VM, a fatal error happened. By the
> way, the HE VM also does not work with live migration, while other VMs can do
> live migration.
>
> ---
> [root@compute2-3 ~]# hosted-engine --set-maintenance --mode=local
> You have new mail in /var/spool/mail/root
> [root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-29
> 13:16:12,435::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.92 (id: 3, score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:22,711::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:22,711::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.92 (id: 3, score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:32,978::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:32,978::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:43,272::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:43,272::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:53,316::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-29
> 13:16:53,562::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-29
> 13:16:53,562::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:03,600::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-29
> 13:17:03,611::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419877023.61 type=state_transition
> detail=EngineUp-LocalMaintenanceMigrateVm hostname='compute2-3'
> MainThread::INFO::2014-12-29
> 13:17:03,672::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (EngineUp-LocalMaintenanceMigrateVm) sent? sent
> MainThread::INFO::2014-12-29
> 13:17:03,911::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-29
> 13:17:03,912::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenanceMigrateVm (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:03,912::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:03,960::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419877023.96 type=state_transition
> detail=LocalMaintenanceMigrateVm-EngineMigratingAway
> hostname='compute2-3'
> MainThread::INFO::2014-12-29
> 13:17:03,980::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (LocalMaintenanceMigrateVm-EngineMigratingAway) sent? sent
> MainThread::INFO::2014-12-29
> 13:17:04,218::states::66::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_penalize_memory)
> Penalizing score by 400 due to low free memory
> MainThread::INFO::2014-12-29
> 13:17:04,218::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineMigratingAway (score: 2000)
> MainThread::INFO::2014-12-29
> 13:17:04,219::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::ERROR::2014-12-29
> 13:17:14,251::hosted_engine::867::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_monitor_migration)
> Failed to migrate
> Traceback (most recent call last):
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
> line 863, in _monitor_migration
> vm_id,
> File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/vds_client.py",
> line 85, in run_vds_client_cmd
> response['status']['message'])
> DetailedError: Error 12 from migrateStatus: Fatal error during migration
> MainThread::INFO::2014-12-29
> 13:17:14,262::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419877034.26 type=state_transition
> detail=EngineMigratingAway-ReinitializeFSM hostname='compute2-3'
> MainThread::INFO::2014-12-29
> 13:17:14,263::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (EngineMigratingAway-ReinitializeFSM) sent? ignored
> MainThread::INFO::2014-12-29
> 13:17:14,496::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state ReinitializeFSM (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:14,496::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:24,536::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-29
> 13:17:24,547::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419877044.55 type=state_transition
> detail=ReinitializeFSM-LocalMaintenance hostname='compute2-3'
> MainThread::INFO::2014-12-29
> 13:17:24,574::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (ReinitializeFSM-LocalMaintenance) sent? sent
> MainThread::INFO::2014-12-29
> 13:17:24,812::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:24,812::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:34,851::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-29
> 13:17:35,095::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:35,095::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-29
> 13:17:45,130::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-29
> 13:17:45,368::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-29
> 13:17:45,368::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> ^C
> [root@compute2-3 ~]#
>
>
> [root@compute2-3 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 0
> Local maintenance : True
> Host timestamp                     : 1014956
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1014956 (Mon Dec 29 13:20:19 2014)
> host-id=1
> score=0
> maintenance=True
> state=LocalMaintenance
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 866019
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=866019 (Mon Dec 29 10:19:45 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 860493
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=860493 (Mon Dec 29 10:20:35 2014)
> host-id=3
> score=2400
> maintenance=False
> state=EngineDown
> [root@compute2-3 ~]#
> ---
> Thanks,
> Cong
>
>
>
> On 2014/12/29, at 8:43, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> I see that the HE VM runs on the host with IP 10.0.0.94, and the two other hosts are in
> the "Local Maintenance" state, so the VM will not migrate to either of them. Can you
> try disabling local maintenance on all hosts in the HE environment, then
> enable "local maintenance" on the host where the HE VM runs, and also provide the output
> of hosted-engine --vm-status.
> Failover works in the following way:
> 1) if the host where the HE VM runs has a score lower by 800 than some other host in the HE
> environment, the HE VM will migrate to the host with the best score
> 2) if something happens to the VM (kernel panic, crash of a service...), the agent will
> restart the HE VM on another host in the HE environment with a positive score
> 3) if you put the host with the HE VM into local maintenance, the VM will migrate to another
> host with a positive score
> Thanks.
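(For reference, case 3 maps to the shell sequence used elsewhere in this thread; a sketch:
hosted-engine --set-maintenance --mode=none    # first clear local maintenance on every host
hosted-engine --set-maintenance --mode=local   # then set it on the host currently running the HE VM
hosted-engine --vm-status                      # and watch the VM move to a host with a positive score
)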
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Artyom Lukianov" <alukiano(a)redhat.com>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, December 29, 2014 6:30:42 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> Thanks and the --vm-status log is as follows:
> [root@compute2-2 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 1008087
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1008087 (Mon Dec 29 11:25:51 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 859142
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=859142 (Mon Dec 29 08:25:08 2014)
> host-id=2
> score=0
> maintenance=True
> state=LocalMaintenance
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 853615
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=853615 (Mon Dec 29 08:25:57 2014)
> host-id=3
> score=0
> maintenance=True
> state=LocalMaintenance
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]#
>
> Could you please explain how VM failover works inside ovirt? Is there any
> other debug option I can enable to check the problem?
>
> Thanks,
> Cong
>
>
> On 2014/12/29, at 1:39, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> Can you also provide the output of hosted-engine --vm-status, please? Previous
> time it was useful, because I do not see anything unusual.
> Thanks
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Artyom Lukianov" <alukiano(a)redhat.com>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, December 29, 2014 7:15:24 AM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> I also changed the maintenance mode to local on another host, but the VM
> on this host cannot be migrated either. The logs are as follows.
>
> [root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
> [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-28
> 21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm is running on host 10.0.0.94 (id 1)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:55,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 21:09:55,701::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419829795.7 type=state_transition
> detail=EngineDown-LocalMaintenance hostname='compute2-2'
> MainThread::INFO::2014-12-28
> 21:09:55,761::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (EngineDown-LocalMaintenance) sent? sent
> MainThread::INFO::2014-12-28
> 21:09:55,990::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-28
> 21:09:55,990::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 21:09:55,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> ^C
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]# ps -ef | grep qemu
> root 18420 2777 0 21:10 pts/0 00:00:00 grep --color=auto qemu
> qemu 29809 1 0 Dec19 ? 01:17:20 /usr/libexec/qemu-kvm
> -name testvm2-2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem
> -m 500 -realtime mlock=off -smp
> 1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> c31e97d0-135e-42da-9954-162b5228dce3 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0059-3610-8033-B4C04F395931,uuid=c31e97d0-135e-42da-9954-162b5228dce3
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2-2.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2014-12-19T20:17:17,driftfix=slew
> -no-kvm-pit-reinjection
> -no-hpet -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive
> file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/5cbeb8c9-4f04-48d0-a5eb-78c49187c550/a0570e8c-9867-4ec4-818f-11e102fc4f9b,if=none,id=drive-virtio-disk0,format=qcow2,serial=5cbeb8c9-4f04-48d0-a5eb-78c49187c550,cache=none,werror=stop,rerror=stop,aio=threads
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:00,bus=pci.0,addr=0x3
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice
> tls-port=5901,addr=10.0.0.93,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
> qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
> [root@compute2-2 ~]#
>
> Thanks,
> Cong
>
>
> On 2014/12/28, at 20:53, "Yue, Cong" <Cong_Yue(a)alliedtelesis.com> wrote:
>
> I checked it again and confirmed that there is one guest VM running on top
> of this host. The log is as follows:
>
> [root@compute2-1 vdsm]# ps -ef | grep qemu
> qemu 2983 846 0 Dec19 ? 00:00:00 [supervdsmServer] <defunct>
> root 5489 3053 0 20:49 pts/0 00:00:00 grep --color=auto qemu
> qemu 26128 1 0 Dec19 ? 01:09:19 /usr/libexec/qemu-kvm
> -name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
> 500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
> -uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2014-12-19T20:18:01,driftfix=slew
> -no-kvm-pit-reinjection
> -no-hpet -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive
> file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice
> tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
> qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
> [root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-28
> 20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-28
> 20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
> Thanks,
> Cong
>
>
> On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> I see that you set local maintenance on host3, which does not have the engine VM on
> it, so there is nothing to migrate from this host.
> If you set local maintenance on host1, the VM must migrate to another host with
> a positive score.
> Thanks
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Simone Tiraboschi" <stirabos(a)redhat.com>
> Cc: users(a)ovirt.org
> Sent: Saturday, December 27, 2014 6:58:32 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> Hi
>
> I had a try with "hosted-engine --set-maintenance --mode=local" on
> compute2-1, which is host 3 in my cluster. From the log, it shows that
> maintenance mode is detected, but migration does not happen.
>
> The logs are as follows. Is there any other config I need to check?
>
> [root@compute2-1 vdsm]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 836296
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=836296 (Sat Dec 27 11:42:39 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 687358
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=687358 (Sat Dec 27 08:42:04 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 681827
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=681827 (Sat Dec 27 08:42:40 2014)
> host-id=3
> score=0
> maintenance=True
> state=LocalMaintenance
> [root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
>
>
> [root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-27
> 11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
>
>
> [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm is running on host 10.0.0.94 (id 1)
> MainThread::INFO::2014-12-27
> 08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Global metadata: {'maintenance': False}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.94 (id 1): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987
> (Sat Dec 27 11:37:30
> 2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
> 'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status':
> {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400,
> 'maintenance': False, 'host-ts': 835987}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.92 (id 3): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528
> (Sat Dec 27 08:37:41
> 2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n',
> 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True,
> 'host-ts': 681528}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Local (id 2): {'engine-health': {'reason': 'vm not running on this
> host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':
> True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215,
> 'gateway': True}
> MainThread::INFO::2014-12-27
> 08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
> Thanks,
> Cong
>
> On 2014/12/22, at 5:29, "Simone Tiraboschi"
> <stirabos(a)redhat.com>
> wrote:
>
>
>
> ----- Original Message -----
> From: "Cong Yue"
> <Cong_Yue(a)alliedtelesis.com>
> To: "Simone Tiraboschi"
> <stirabos(a)redhat.com>
> Cc:
> users(a)ovirt.org
> Sent: Friday, December 19, 2014 7:22:10 PM
> Subject: RE: [ovirt-users] VM failover with ovirt3.5
>
> Thanks for the information. This is the log for my three ovirt nodes.
> The output of hosted-engine --vm-status shows the engine state for my
> 2nd and 3rd ovirt nodes as DOWN.
> Is this the reason why VM failover does not work in my environment?
>
> No, they look OK: the engine VM can only run on a single host at a time.
>
> How can I make the engine work on my 2nd and 3rd ovirt nodes as well?
>
> If you put host 1 in local maintenance mode (hosted-engine
> --set-maintenance --mode=local), the VM should migrate to host 2; if you
> then reactivate host 1 (hosted-engine --set-maintenance --mode=none) and
> put host 2 in local maintenance mode, the VM should migrate again.
>
> Can you please try that and post the logs if something goes wrong?
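>
> A minimal sketch of that test, in case it helps (the commands are the
> same ones named above; the grep fields match the --vm-status output
> quoted below in this thread):
>
> -- snip --
> # on host 1: drain the engine VM away from this host
> hosted-engine --set-maintenance --mode=local
>
> # on any host: confirm the engine VM came up on host 2
> hosted-engine --vm-status | grep -E "Hostname|Engine status"
>
> # on host 1: leave maintenance so it can host the VM again
> hosted-engine --set-maintenance --mode=none
> -- snip --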
>
>
> --
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 150475
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=150475 (Fri Dec 19 13:12:18 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 1572
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1572 (Fri Dec 19 10:12:18 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : False
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : unknown stale-data
> Score : 2400
> Local maintenance : False
> Host timestamp : 987
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=987 (Fri Dec 19 10:09:58 2014)
> host-id=3
> score=2400
> maintenance=False
> state=EngineDown
>
> --
> And the /var/log/ovirt-hosted-engine-ha/agent.log for the three ovirt
> nodes is as follows:
> --
> 10.0.0.94(hosted-engine-1)
> ---
> MainThread::INFO::2014-12-19
> 13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-19
> 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Global metadata: {'maintenance': False}
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.93 (id 2): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448
> (Fri Dec 19 10:10:14
> 2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
> 'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
> 'host-ts': 1448}
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.92 (id 3): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987
> (Fri Dec 19 10:09:58
> 2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
> 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
> 'host-ts': 987}
> MainThread::INFO::2014-12-19
> 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up',
> 'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance':
> False, 'cpu-load': 0.0269, 'gateway': True}
> MainThread::INFO::2014-12-19
> 13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-19
> 13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> ----
>
> 10.0.0.93 (hosted-engine-2)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
>
> 10.0.0.92(hosted-engine-3)
> same as 10.0.0.93
> --
>
> -----Original Message-----
> From: Simone Tiraboschi [mailto:stirabos@redhat.com]
> Sent: Friday, December 19, 2014 12:28 AM
> To: Yue, Cong
> Cc:
> users(a)ovirt.org
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
>
>
> ----- Original Message -----
> From: "Cong Yue"
> <Cong_Yue(a)alliedtelesis.com>
> To:
> users(a)ovirt.org
> Sent: Friday, December 19, 2014 2:14:33 AM
> Subject: [ovirt-users] VM failover with ovirt3.5
>
>
>
> Hi
>
>
>
> In my environment, I have 3 ovirt nodes in one cluster, and on top of
> host-1 there is one VM hosting the ovirt engine.
>
> I also have one external storage server that the cluster uses as the
> data domain for the engine and for data.
>
> I confirmed live migration works well in my environment.
>
> But VM failover seems very buggy if I force one ovirt node to shut
> down. Sometimes the VM on the node that was shut down can migrate to
> another host, but it takes several minutes or more.
>
> Sometimes it cannot migrate at all. Sometimes the VM only begins to
> move once the host is back.
>
> Can you please check or share the logs under
> /var/log/ovirt-hosted-engine-ha/?
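>
> If it helps, a minimal way to grab the relevant files from each host (a
> sketch assuming the default install paths; broker.log sits alongside the
> agent.log excerpted elsewhere in this thread):
>
> -- snip --
> tail -n 500 /var/log/ovirt-hosted-engine-ha/agent.log \
>             /var/log/ovirt-hosted-engine-ha/broker.log
> -- snip --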
>
> Is there any documentation explaining how VM failover works? And are
> there any reported bugs related to this?
>
> http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
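>
> A crude way to follow those agent state transitions live during a
> failover test (a sketch; the field names are taken from the --vm-status
> output shown earlier in this thread):
>
> -- snip --
> watch -n 10 'hosted-engine --vm-status | grep -E "Hostname|Engine status|Score|state"'
> -- snip --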
>
> Thanks in advance,
>
> Cong
9 years, 10 months
Status libgfapi support in oVirt
by Joop
I have been trying to use libgfapi glusterfs support in oVirt but can't
get it to work. After talks on IRC it seems I should apply a patch
(http://gerrit.ovirt.org/33768) to enable libgfapi, but even with the
patch I have had no luck. Systems used:
- hosts Centos7 or Fedora20 (so upto date qemu/libvirt/oVirt(3.5))
- glusterfs-3.6.1
- vdsm-4.16.0-524.gitbc618a4.el7.x86_64 (snapshot master 14-nov)
- vdsm-4.16.7-1.gitdb83943.el7.x86_64 (official ovirt-3.5 vdsm, seems
newer than master snapshot?? )
Just adding the patch to vdsm-4.16.7-1.gitdb83943.el7.x86_64 doesn't
work; vdsm no longer starts due to an error in virt/vm.py.
Q1: what is the exact status of libgfapi support in oVirt?
Q2: how do I test that patch? (one possible check is sketched below)
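One possible check for Q2: inspect the libvirt XML of a running VM on
the host and see which disk path it actually got (a read-only sketch;
VMNAME is a placeholder):

-- snip --
# libgfapi in use: <disk type='network'> with <source protocol='gluster' ...>
# fuse fallback:   a plain file path under /rhev/data-center/mnt/glusterSD/...
virsh -r dumpxml VMNAME | grep -B2 -A4 '<source'
-- snip --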
Joop
9 years, 10 months
[STORAGE] Adding posix compliant FS
by Julian De Marchi
Heya--
I'm using ovirt 3.5 and trying to add a posix compliant FS to a node in
my cluster.
The storage I'm trying to add is contained within LVM. Below is a link
to my log files on the node where I'm trying to attach the storage.
http://pastebin.com/fzN9ktAX
I've read the ovirt manual for adding posix compliant storage and
believe I'm doing everything correctly.
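In case it is permissions rather than the manual steps: vdsm runs as
uid 36 / gid 36 (vdsm:kvm) and must be able to write to the mount. A
minimal sanity check, with the device and mountpoint names as
placeholders:

-- snip --
mount -t ext4 /dev/VG/LV /mnt/test        # use the same VFS type you give oVirt
chown 36:36 /mnt/test
sudo -u vdsm touch /mnt/test/write_test   # this must succeed
-- snip --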
Any help getting this storage added would be greatly appreciated. If I
forgot to include any info, please ask.
--julian
9 years, 10 months