I'd like to add that using the floating-MAC "balance-tlb" (mode 5) or the ARP-negotiation-based
"balance-alb" (mode 6) load balancing will hurt latency and
performance; these modes should be avoided.
Mode zero, "balance-rr", should also be avoided: it is the only mode that
allows a single TCP/IP stream to utilize more than one interface, and hence it creates
additional jitter, latency, and performance impact, as frames/packets are sent and
arrive on different interfaces, whereas balancing per flow is preferred. Unless
your data center uses L2-only traffic, I really don't see any
use for mode zero.
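For completeness, here is a minimal sketch of a mode 4 (802.3ad/LACP) bond on a
RHEL/CentOS-style host; the interface name bond0 and the layer2+3 hash policy are
illustrative assumptions, not taken from this thread:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- hypothetical example
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
# mode=802.3ad is LACP (mode 4); xmit_hash_policy=layer2+3 keeps each
# flow pinned to one slave, so a single TCP stream never spans interfaces
BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer2+3"
```

After restarting networking, "cat /proc/net/bonding/bond0" should report the
802.3ad aggregator and its active slaves.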
Cisco routers have a feature called IP CEF, which is enabled by default and
balances traffic per TCP/IP flow instead of per packet; it is used to make better
per-flow load-balancing routing decisions. If it is turned off, per-packet load
balancing is enforced, causing a heavy hit on the router's CPU and memory,
as a decision has to be made for every packet. The higher the bit rate, the
harder the impact on the router's resources, especially with small packets.
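As a rough illustration (IOS command syntax from memory; verify against your
platform's documentation), confirming that CEF is on looks like:

```
Router# show ip cef summary     ! verify CEF is enabled and populated
Router# configure terminal
Router(config)# ip cef          ! enable CEF globally (the IOS default)
Router(config)# end
```

With CEF on, load sharing is per destination/flow by default; forcing per-packet
load sharing on an interface produces the per-packet behavior described above.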
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 29, 2014 6:53:59 AM
Subject: Users Digest, Vol 39, Issue 163
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: Problem after update ovirt to 3.5 (Juan Jose)
2. Re: ??: bond mode balance-alb (Dan Kenigsberg)
3. Re: VM failover with ovirt3.5 (Yue, Cong)
----------------------------------------------------------------------
Message: 1
Date: Sun, 28 Dec 2014 20:08:37 +0100
From: Juan Jose <jj197005(a)gmail.com>
To: Simone Tiraboschi <stirabos(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] Problem after update ovirt to 3.5
Message-ID:
<CADrE9wYtNdMPNsyjjZxA3zbyKZhYB5DA03wQ17dTLfuBBtA-Bg(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Many thanks Simone,
Juanjo.
On Tue, Dec 16, 2014 at 1:48 PM, Simone Tiraboschi <stirabos(a)redhat.com>
wrote:
----- Original Message -----
> From: "Juan Jose" <jj197005(a)gmail.com>
> To: "Yedidyah Bar David" <didi(a)redhat.com>, sbonazzo(a)redhat.com
> Cc: users(a)ovirt.org
> Sent: Tuesday, December 16, 2014 1:03:17 PM
> Subject: Re: [ovirt-users] Problem after update ovirt to 3.5
>
> Hello everybody,
>
> It was the firewall; after upgrading my engine, the NFS configuration had
> disappeared. I have configured it again as Red Hat says, and now it works
> properly again.
>
> Many thank again for the indications.
We already had a patch for it [1];
it will be released next month with oVirt 3.5.1.
[1]
http://gerrit.ovirt.org/#/c/32874/
> Juanjo.
>
> On Mon, Dec 15, 2014 at 2:32 PM, Yedidyah Bar David < didi(a)redhat.com >
> wrote:
>
>
> ----- Original Message -----
> > From: "Juan Jose" < jj197005(a)gmail.com >
> > To: users(a)ovirt.org
> > Sent: Monday, December 15, 2014 3:17:15 PM
> > Subject: [ovirt-users] Problem after update ovirt to 3.5
> >
> > Hello everybody,
> >
> > After upgrading my engine to oVirt 3.5, I have also upgraded one of my
> > hosts to oVirt 3.5. After that, it seems that all has gone well,
> > apparently.
> >
> > But within some seconds my ISO domain is disconnected and it is
> > impossible to Activate. I'm attaching my engine.log. The error below is
> > shown each time I try to Activate the ISO domain. Before the upgrade it
> > was working without problems:
> >
> > 2014-12-15 13:25:07,607 ERROR
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null,
> > Call Stack: null, Custom Event ID: -1, Message: Failed to connect Host
> > host1 to the Storage Domains ISO_DOMAIN.
> > 2014-12-15 13:25:07,608 INFO
> > [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> > (org.ovirt.thread.pool-8-thread-5) [460733dd] FINISH,
> > ConnectStorageServerVDSCommand, return:
> > {81c0a853-715c-4478-a812-6a74808fc482=477}, log id: 3590969e
> > 2014-12-15 13:25:07,615 ERROR
> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null,
> > Call Stack: null, Custom Event ID: -1, Message: The error message for
> > connection ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312
> > returned by VDSM was: Problem while trying to mount target
> > 2014-12-15 13:25:07,616 ERROR
> > [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
> > (org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with
> > details ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312
> > failed because of error code 477 and error message is: problem while
> > trying to mount target
> >
> > If any other information is required, please tell me.
>
> Is the ISO domain on the engine host?
>
> Please check iptables there, as well as /etc/exports and /etc/exports.d.
>
> Please post the setup (upgrade) log, check /var/log/ovirt-engine/setup.
>
> Thanks,
> --
> Didi
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
>
http://lists.ovirt.org/mailman/listinfo/users
>
------------------------------
Message: 2
Date: Sun, 28 Dec 2014 23:56:58 +0000
From: Dan Kenigsberg <danken(a)redhat.com>
To: Blaster <Blaster(a)556nato.com>
Cc: "Users(a)ovirt.org List" <users(a)ovirt.org>
Subject: Re: [ovirt-users] ??: bond mode balance-alb
Message-ID: <20141228235658.GE21690(a)redhat.com>
Content-Type: text/plain; charset=us-ascii
On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
>Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
>https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
Dan,
What is bad about these modes that oVirt can't use them?
I can only quote jpirko's words from the link above:
Do not use tlb or alb in bridge, never! It does not work, that's it. The reason
is it mangles source macs in xmit frames and arps. When it is possible, just
use mode 4 (lacp). That should be always possible because all enterprise
switches support that. Generally, for 99% of use cases, you *should* use mode
4. There is no reason to use other modes.
I just tested mode 4, and LACP with Fedora 20 appears not to be
compatible with the LAG mode on my Dell 2824.
Would there be any issues with bringing two NICs into the VM and doing
balance-alb at the guest level?
------------------------------
Message: 3
Date: Sun, 28 Dec 2014 20:53:44 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Artyom Lukianov <alukiano(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Message-ID: <B7E7D6D4-B85D-471C-87A7-EA9AD32BF279(a)alliedtelesis.com>
Content-Type: text/plain; charset="utf-8"
I checked it again and confirmed there is one guest VM running on top of this host.
The log is as follows:
[root@compute2-1 vdsm]# ps -ef | grep qemu
qemu 2983 846 0 Dec19 ? 00:00:00 [supervdsmServer]
<defunct>
root 5489 3053 0 20:49 pts/0
00:00:00 grep --color=auto qemu
qemu 26128 1 0 Dec19 ? 01:09:19 /usr/libexec/qemu-kvm
-name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
-uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:18:01,driftfix=slew
-no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive
file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
Thanks,
Cong
On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano@redhat.com> wrote:
I see that you set local maintenance on host3, which does not have the engine VM on it, so
there is nothing to migrate from this host.
If you set local maintenance on host1, the VM must migrate to another host with a positive
score.
Thanks
----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: "Simone Tiraboschi" <stirabos@redhat.com>
Cc: users@ovirt.org
Sent: Saturday, December 27, 2014 6:58:32 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Hi
I had a try with "hosted-engine --set-maintenance --mode=local" on
compute2-1, which is host 3 in my cluster. From the log, it shows
maintenance mode is detected, but migration does not happen.
The logs are as follows. Is there any other config I need to check?
[root@compute2-1 vdsm]# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 836296
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=836296 (Sat Dec 27 11:42:39 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down",
"detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 687358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=687358 (Sat Dec 27 08:42:04 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : True
Hostname : 10.0.0.92
Host ID : 3
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down",
"detail": "unknown"}
Score : 0
Local maintenance : True
Host timestamp : 681827
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=681827 (Sat Dec 27 08:42:40 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-27
11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-27
08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.94 (id 1): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987
(Sat Dec 27 11:37:30
2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
'hostname': '10.0.0.94', 'alive': True, 'host-id': 1,
'engine-status':
{'health': 'good', 'vm': 'up', 'detail':
'up'}, 'score': 2400,
'maintenance': False, 'host-ts': 835987}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.92 (id 3): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528
(Sat Dec 27 08:37:41
2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n',
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3,
'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad',
'vm':
'down', 'detail': 'unknown'}, 'score': 0,
'maintenance': True,
'host-ts': 681528}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 2): {'engine-health': {'reason': 'vm not running on this
host', 'health': 'bad', 'vm': 'down',
'detail': 'unknown'}, 'bridge':
True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load':
0.0215,
'gateway': True}
MainThread::INFO::2014-12-27
08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
Thanks,
Cong
On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos@redhat.com> wrote:
----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: "Simone Tiraboschi" <stirabos@redhat.com>
Cc: users@ovirt.org
Sent: Friday, December 19, 2014 7:22:10 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5
Thanks for the information. This is the log for my three ovirt nodes.
From the output of hosted-engine --vm-status, it shows that the engine
state for my 2nd and 3rd oVirt nodes is DOWN.
Is this the reason why VM failover does not work in my environment?
No, they look OK: you can run the engine VM on a single host at a time.
How can I also make the engine work on my 2nd and 3rd oVirt nodes?
If you put host 1 in local maintenance mode ( hosted-engine --set-maintenance
--mode=local ), the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine
--set-maintenance --mode=none ) and put host 2 in local maintenance mode, the VM should
migrate again.
Can you please try that and post the logs if something goes wrong?
--
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 150475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=150475 (Fri Dec 19 13:12:18 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down",
"detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 1572
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1572 (Fri Dec 19 10:12:18 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : False
Hostname : 10.0.0.92
Host ID : 3
Engine status : unknown stale-data
Score : 2400
Local maintenance : False
Host timestamp : 987
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=987 (Fri Dec 19 10:09:58 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown
--
And the /var/log/ovirt-hosted-engine-ha/agent.log for three ovirt nodes are
as follows:
--
10.0.0.94(hosted-engine-1)
---
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.93 (id 2): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448
(Fri Dec 19 10:10:14
2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.93', 'alive': True, 'host-id': 2,
'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad',
'vm':
'down', 'detail': 'unknown'}, 'score': 2400,
'maintenance': False,
'host-ts': 1448}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.92 (id 3): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987
(Fri Dec 19 10:09:58
2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3,
'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad',
'vm':
'down', 'detail': 'unknown'}, 'score': 2400,
'maintenance': False,
'host-ts': 987}
MainThread::INFO::2014-12-19
13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 1): {'engine-health': {'health': 'good', 'vm':
'up',
'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0,
'maintenance':
False, 'cpu-load': 0.0269, 'gateway': True}
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
----
10.0.0.93 (hosted-engine-2)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
10.0.0.92 (hosted-engine-3)
same as 10.0.0.93
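The agent.log excerpts above repeat the same two message shapes many times. As a purely illustrative aid (the regexes below assume only the exact format shown in these excerpts, not the full variety of real agent logs), a short script can condense such a dump into its state transitions:

```python
import re

# Matches only the "Current state <State> (score: <N>)" lines shown above.
STATE_RE = re.compile(r"Current state (\w+) \(score: (\d+)\)")

def summarize(lines):
    """Return the sequence of (state, score) pairs, collapsing repeats."""
    out = []
    for line in lines:
        m = STATE_RE.search(line)
        if m:
            entry = (m.group(1), int(m.group(2)))
            if not out or out[-1] != entry:
                out.append(entry)
    return out

log = [
    "Current state EngineUp (score: 2400)",
    "Current state EngineUp (score: 2400)",
    "Current state EngineDown (score: 2400)",
]
print(summarize(log))  # [('EngineUp', 2400), ('EngineDown', 2400)]
```

Running this over the full excerpts above would show that both hosts sat in a single stable state (EngineUp on .94, EngineDown on .93) for the whole period logged.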
--
-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:28 AM
To: Yue, Cong
Cc: users@ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5
----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: users@ovirt.org
Sent: Friday, December 19, 2014 2:14:33 AM
Subject: [ovirt-users] VM failover with ovirt3.5
Hi
In my environment, I have 3 oVirt nodes in one cluster, and on top of
host-1 there is one VM hosting the oVirt engine.
I also have one external storage server that the cluster uses as the data
domain for the engine and for data.
I confirmed that live migration works well in my environment.
But VM failover seems very buggy if I force one oVirt node to shut down.
Sometimes the VM on the node that was shut down can migrate to another
host, but it takes more than several minutes.
Sometimes it cannot migrate at all. Sometimes the VM only begins to move
once the host is back.
Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/?
Is there some documentation explaining how VM failover works? And
are there any reported bugs related to this?
http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
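The state diagram linked above describes the agent's behavior in full. As a rough illustration only (this is not the actual ovirt-hosted-engine-ha code, and the tie-break by lowest host id is an assumption), the "Best remote host" lines in the logs can be thought of as picking the live host with the highest published score:

```python
def best_host(hosts):
    """Pick the best failover target from a mapping of host id -> (score, alive).

    Chooses the live host with the highest score; ties are broken by the
    lowest host id. A host in local maintenance publishes score 0, so it
    is effectively never chosen while any healthy host remains.
    """
    live = {hid: score for hid, (score, alive) in hosts.items() if alive}
    if not live:
        return None
    return min(live, key=lambda hid: (-live[hid], hid))

# Mirrors the three-host cluster in this thread: host 3 is in local
# maintenance (score 0), hosts 1 and 2 are healthy.
hosts = {1: (2400, True), 2: (2400, True), 3: (0, True)}
print(best_host(hosts))  # 1
```

This also matches the observation later in the thread: setting local maintenance on a host that is not running the engine VM changes nothing, because that host was never the selected target.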
Thanks in advance,
Cong
This e-mail message is for the sole use of the intended recipient(s)
and may contain confidential and privileged information. Any
unauthorized review, use, disclosure or distribution is prohibited. If
you are not the intended recipient, please contact the sender by reply
e-mail and destroy all copies of the original message. If you are the
intended recipient, please be advised that the content of this message
is subject to access, review and disclosure by the sender's e-mail System
Administrator.
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
------------------------------
End of Users Digest, Vol 39, Issue 163
**************************************
------=_Part_1871238_1615445632.1419874799888
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable
<html><body><div style=3D"font-family: georgia,serif; font-size: 12pt;
colo=
r: #000000"><div>I'd like to add that<span style=3D"font-size:
12pt;"> =
;using of floating MAC </span><span style=3D"font-size:
12pt;">"balance-tlb=
" </span><span style=3D"font-size: 12pt;">for mode 5 or ARP
</span><span st=
yle=3D"font-size: 12pt;">negotiation for mode 6 load balancing
"</span><spa=
n style=3D"font-size: 12pt;">balance-alb" will influence latency and
perfor=
mance, using such mode should be
avoided. </span></div><div>Mode zero =
or "balance-rr" should be also avoided as it is the
only m=
ode that will allow a single TCP/IP stream to utilize more than one interfa=
ce, hence will create additional jitter, latency and performance impacts,&n=
bsp;as frames/packets will be sent and arrive from different interfaces, wh=
ile preferred is to balance on per flow. Unless in your data center you're =
not using L2-only based traffic, I really don't see any usage for mode zero=
.</div><div>In Cisco routers the is a functionality called IP-CEF, which is=
turned on by default and balancing traffic on per TCP/IP flow, instead of =
per-packet, it is being used for better routing decisions for per-flow load=
balancing, if turned off, then per-packet load balancing will be enforced,=
causing high performance impact on router's CPU and memory resources, as d=
ecision have to be made on per-packet level, the higher the bit rate, the h=
arder impact on resources of the router will be, especially for small sized=
packets.</div><div><br></div><div><span
name=3D"x"></span><br>Thanks in ad=
vance.<br><div><br></div>Best
regards,<br>Nikolai<br>____________________<b=
r>Nikolai Sednev<br>Senior Quality Engineer at Compute team<br>Red Hat
Isra=
el<br>34 Jerusalem Road,<br>Ra'anana, Israel
43501<br><div><br></div>Tel: &=
nbsp; +972 9 7692043<br>Mobile: +972 52
7342734<br>Ema=
il: nsednev(a)redhat.com<br>IRC: nsednev<span
name=3D"x"></span><br></div><di=
v><br></div><hr id=3D"zwchr"><div
style=3D"color:#000;font-weight:normal;fo=
nt-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif=
;font-size:12pt;"><b>From:
</b>users-request(a)ovirt.org<br><b>To: </b>users@=
ovirt.org<br><b>Sent: </b>Monday, December 29, 2014 6:53:59
AM<br><b>Subjec=
t: </b>Users Digest, Vol 39, Issue
163<br><div><br></div>Send Users mailing=
list submissions
to<br> use=
rs(a)ovirt.org<br><div><br></div>To subscribe or unsubscribe via the
World Wi=
de Web,
visit<br> http://lis=
ts.ovirt.org/mailman/listinfo/users<br>or, via email, send a message with s=
ubject or body 'help'
to<br>  =
;users-request(a)ovirt.org<br><div><br></div>You can reach the
person managin=
g the list
at<br> users-owne=
r(a)ovirt.org<br><div><br></div>When replying, please edit your
Subject line =
so it is more specific<br>than "Re: Contents of Users
digest..."<br><div><b=
r></div><br>Today's
Topics:<br><div><br></div> 1. Re:
Pro=
blem after update ovirt to 3.5 (Juan Jose)<br> 2. Re:
??:=
bond mode balance-alb (Dan Kenigsberg)<br> 3. Re: VM
fai=
lover with ovirt3.5 (Yue,
Cong)<br><div><br></div><br>---------------------=
-------------------------------------------------<br><div><br></div>Message=
: 1<br>Date: Sun, 28 Dec 2014 20:08:37 +0100<br>From: Juan Jose
<jj19700=
5(a)gmail.com&gt;<br>To: Simone Tiraboschi
<stirabos@redhat.com><br>Cc:=
"users(a)ovirt.org" &lt;users(a)ovirt.org&gt;<br>Subject: Re:
[ovirt-users] Pr=
oblem after update ovirt to
3.5<br>Message-ID:<br> &=
nbsp; <CADrE9wYtNdMPNsyjjZxA3zbyKZhYB5DA03wQ17dTLfuBBtA=
-Bg(a)mail.gmail.com&gt;<br>Content-Type: text/plain;
charset=3D"utf-8"<br><d=
iv><br></div>Many thanks
Simone,<br><div><br></div>Juanjo.<br><div><br></di=
v>On Tue, Dec 16, 2014 at 1:48 PM, Simone Tiraboschi &lt;stirabos(a)redhat.co=
m><br>wrote:<br><div><br></div>><br>><br>>
----- Original Messa=
ge -----<br>> > From: "Juan Jose"
&lt;jj197005(a)gmail.com&gt;<br>&gt; =
> To: "Yedidyah Bar David" &lt;didi(a)redhat.com&gt;,
sbonazzo(a)redhat.com<=
br>> > Cc: users(a)ovirt.org<br>&gt; > Sent: Tuesday,
December 16, 2=
014 1:03:17 PM<br>> > Subject: Re: [ovirt-users] Problem after
update=
ovirt to 3.5<br>> ><br>> > Hello
everybody,<br>> ><br>&g=
t; > It was the firewall, after upgrade my engine the NFS configuration =
had<br>> > disappered, I have configured again as Red Hat says and
no=
w it works<br>> > properly again.<br>>
><br>> > Many than=
k again for the indications.<br>><br>> We already had a patch
for it =
[1],<br>> it will released next month with oVirt
3.5.1<br>><br>> [=
1]
http://gerrit.ovirt.org/#/c/32874/<br>><br>> >
Juanjo.<br>> =
><br>> > On Mon, Dec 15, 2014 at 2:32 PM, Yedidyah Bar David
< =
didi(a)redhat.com ><br>> > wrote:<br>>
><br>> ><br>> =
> ----- Original Message -----<br>> > > From: "Juan
Jose" < =
jj197005(a)gmail.com ><br>> > > To:
users(a)ovirt.org<br>&gt; > =
> Sent: Monday, December 15, 2014 3:17:15 PM<br>> > >
Subject: =
[ovirt-users] Problem after update ovirt to 3.5<br>> >
><br>> &=
gt; > Hello everybody,<br>> > ><br>>
> > After upgrade=
my engine to oVirt 3.5, I have also upgraded one of my<br>>
hosts<br>&g=
t; > > to<br>> > > oVirt 3.5. After that it seems
that all h=
ave gone good aparently.<br>> > ><br>> >
> But in some se=
conds my ISO domain is desconnected and it is impossible<br>>
to<br>>=
> > Activate. I'm attaching my engine.log. The below error is showed=
each<br>> time<br>> > > I<br>>
> > try to Activate th=
e ISO domain. Before the upgrade it was working<br>>
without<br>> >=
; > problems:<br>> > ><br>> >
> 2014-12-15 13:25:07,60=
7 ERROR<br>> > >
[org.ovirt.engine.core.dal.dbbroker.auditloghandl=
ing.AuditLogDirector]<br>> > >
(org.ovirt.thread.pool-8-thread-5) =
[460733dd] Correlation ID: null,<br>> Call<br>> >
> Stack: null=
, Custom Event ID: -1, Message: Failed to connect Host<br>> host1
to<br>=
> > > the Storage Domains ISO_DOMAIN.<br>> >
> 2014-12-15=
13:25:07,608 INFO<br>> > ><br>>
[org.ovirt.engine.core.vdsbrok=
er.vdsbroker.ConnectStorageServerVDSCommand]<br>> > >
(org.ovirt.t=
hread.pool-8-thread-5) [460733dd] FINISH,<br>> > >
ConnectStorageS=
erverVDSCommand, return:<br>> > >
{81c0a853-715c-4478-a812-6a74808=
fc482=3D477}, log id: 3590969e<br>> > > 2014-12-15 13:25:07,615
ER=
ROR<br>> > >
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.=
AuditLogDirector]<br>> > > (org.ovirt.thread.pool-8-thread-5)
[460=
733dd] Correlation ID: null,<br>> Call<br>> > >
Stack: null, Cu=
stom Event ID: -1, Message: The error message for<br>>
connection<br>>=
; > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 ret=
urned by<br>> > > VDSM<br>> > > was:
Problem while trying=
to mount target<br>> > > 2014-12-15 13:25:07,616
ERROR<br>> &g=
t; > [org.ovirt.engine.core.bll.storage.NFSStorageHelper]<br>>
> &=
gt; (org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with<br>&g=
t; details<br>> > >
ovirt-engine.siee.local:/var/lib/exports/iso-2=
0140303082312 failed<br>> because<br>> > > of
error code 477 an=
d error message is: problem while trying to mount<br>> > >
target<=
br>> > ><br>> > > If any other
information is required, p=
lease tell me.<br>> ><br>> > Is the ISO domain on
the engine ho=
st?<br>> ><br>> > Please check there iptables and
/etc/exports,=
/etc/exports.d.<br>> ><br>> > Please post the
setup (upgrade) =
log, check /var/log/ovirt-engine/setup.<br>> ><br>>
> Thanks,<b=
r>> > --<br>> > Didi<br>>
><br>> > ________________=
_______________________________<br>> > Users mailing
list<br>> >=
; Users(a)ovirt.org<br>&gt; >
http://lists.ovirt.org/mailman/listinfo/user=
s<br>> ><br>><br>-------------- next part
--------------<br>An HTM=
L attachment was scrubbed...<br>URL: <http://lists.ovirt.org/pipermail/u=
sers/attachments/20141228/bab30c2a/attachment-0001.html><br><div><br></d=
iv>------------------------------<br><div><br></div>Message:
2<br>Date: Sun=
, 28 Dec 2014 23:56:58 +0000<br>From: Dan Kenigsberg
&lt;danken(a)redhat.com&=
gt;<br>To: Blaster &lt;Blaster(a)556nato.com&gt;<br>Cc:
"Users(a)ovirt.org List=
" &lt;users(a)ovirt.org&gt;<br>Subject: Re: [ovirt-users] ??: bond mode
balan=
ce-alb<br>Message-ID:
&lt;20141228235658.GE21690(a)redhat.com&gt;<br>Content-=
Type: text/plain; charset=3Dus-ascii<br><div><br></div>On Fri, Dec
26, 2014=
at 12:39:45PM -0600, Blaster wrote:<br>> On 12/23/2014 2:55 AM, Dan Ken=
igsberg wrote:<br>> >Bug 1094842 - Bonding modes 0, 5 and 6 should
be=
avoided for VM networks<br>>
>https://bugzilla.redhat.com/show_bug.c=
gi?id=3D1094842#c0<br>> <br>> Dan,<br>>
<br>> What is bad about=
these modes that oVirt can't use them?<br><div><br></div>I
can only quote =
jpirko's workds from the link
above:<br><div><br></div> D=
o not use tlb or alb in bridge, never! It does not work, that's it. The rea=
son<br> is it mangles source macs in xmit frames and
arps=
. When it is possible, just<br> use mode 4 (lacp).
That s=
hould be always possible because all enterprise<br>
switc=
hes support that. Generally, for 99% of use cases, you *should* use mode<br=
4. There is no reason to use other
modes.<br><div><br><=
/div>> <br>> I
just tested mode 4, and the LACP with Fedora 20 appear=
s to not be<br>> compatible with the LAG mode on my Dell
2824.<br>> <=
br>> Would there be any issues with bringing two NICS into the VM and do=
ing<br>> balance-alb at the guest level?<br>> <br>>
<br>> <br><=
div><br></div><br>------------------------------<br><div><br></div>Message:=
3<br>Date: Sun, 28 Dec 2014 20:53:44 -0800<br>From: "Yue, Cong"
<Cong_Y=
ue(a)alliedtelesis.com&gt;<br>To: Artyom Lukianov
&lt;alukiano(a)redhat.com&gt;=
<br>Cc: "users(a)ovirt.org"
&lt;users(a)ovirt.org&gt;<br>Subject: Re: [ovirt-us=
ers] VM failover with ovirt3.5<br>Message-ID: <B7E7D6D4-B85D-471C-87A7-E=
A9AD32BF279(a)alliedtelesis.com&gt;<br>Content-Type: text/plain;
charset=3D"u=
tf-8"<br><div><br></div>I checked it again and confirmed
there is one guest=
VM is running on the top of this host. The log is as
follows:<br><div><br>=
</div>[root@compute2-1 vdsm]# ps -ef | grep qemu<br>qemu
&nbs=
p;2983 846 0 Dec19 ?
00:00:00<x-=
apple-data-detectors://0> [supervdsmServer] <defunct><br>root
&nbs=
p; 5489 3053 0
20:49<x-apple-data-detectors://1=
> pts/0 00:00:00<x-apple-data-detectors://2>
grep --c=
olor=3Dauto qemu<br>qemu 26128 1
0 Dec19 =
? 01:09:19 /usr/libexec/qemu-kvm<br>-name
testvm=
2 -S -machine rhel6.5.0,accel=3Dkvm,usb=3Doff -cpu Nehalem -m<br>500 -realt=
ime mlock=3Doff -smp 1,maxcpus=3D16,sockets=3D16,cores=3D1,threads=3D1<br>-=
uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios<br>type=3D1,manufacturer=
=3DoVirt,product=3DoVirt<br>Node,version=3D7-0.1406.el7.centos.2.5,serial=
=3D4C4C4544-0030-3310-8059-B8C04F585231,uuid=3De46bca87-4df5-4287-844b-90a2=
6fccef33<br>-no-user-config -nodefaults -chardev<br>socket,id=3Dcharmonitor=
,path=3D/var/lib/libvirt/qemu/testvm2.monitor,server,nowait<br>-mon chardev=
=3Dcharmonitor,id=3Dmonitor,mode=3Dcontrol -rtc<br>base=3D2014-12-19T20:18:=
01<x-apple-data-detectors://4>,driftfix=3Dslew -no-kvm-pit-reinjectio=
n<br>-no-hpet -no-shutdown -boot strict=3Don -device<br>piix3-usb-uhci,id=
=3Dusb,bus=3Dpci.0,addr=3D0x1.0x2 -device<br>virtio-scsi-pci,id=3Dscsi0,bus=
=3Dpci.0,addr=3D0x4 -device<br>virtio-serial-pci,id=3Dvirtio-serial0,max_po=
rts=3D16,bus=3Dpci.0,addr=3D0x5<br>-drive if=3Dnone,id=3Ddrive-ide0-1-0,rea=
donly=3Don,format=3Draw,serial=3D<br>-device ide-cd,bus=3Dide.1,unit=3D0,dr=
ive=3Ddrive-ide0-1-0,id=3Dide0-1-0<br>-drive file=3D/rhev/data-center/00000=
002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images=
/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,=
if=3Dnone,id=3Ddrive-virtio-disk0,format=3Dqcow2,serial=3Db4b5426b-95e3-41a=
f-b286-da245891cdaf,cache=3Dnone,werror=3Dstop,rerror=3Dstop,aio=3Dthreads<=
br>-device virtio-blk-pci,scsi=3Doff,bus=3Dpci.0,addr=3D0x6,drive=3Ddrive-v=
irtio-disk0,id=3Dvirtio-disk0,bootindex=3D1<br>-netdev tap,fd=3D26,id=3Dhos=
tnet0,vhost=3Don,vhostfd=3D27 -device<br>virtio-net-pci,netdev=3Dhostnet0,i=
d=3Dnet0,mac=3D00:1a:4a:db:94:01,bus=3Dpci.0,addr=3D0x3<br>-chardev socket,=
id=3Dcharchannel0,path=3D/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-=
844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait<br>-device virtserial=
port,bus=3Dvirtio-serial0.0,nr=3D1,chardev=3Dcharchannel0,id=3Dchannel0,nam=
e=3Dcom.redhat.rhevm.vdsm<br>-chardev socket,id=3Dcharchannel1,path=3D/var/=
lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.gue=
st_agent.0,server,nowait<br>-device virtserialport,bus=3Dvirtio-serial0.0,n=
r=3D2,chardev=3Dcharchannel1,id=3Dchannel1,name=3Dorg.qemu.guest_agent.0<br=
-chardev spicevmc,id=3Dcharchannel2,name=3Dvdagent
-device<br>virtserialpo=
rt,bus=3Dvirtio-serial0.0,nr=3D3,chardev=3Dcharchannel2,id=3Dchannel2,name=
=3Dcom.redhat.spice.0<br>-spice tls-port=3D5900,addr=3D10.0.0.92,x509-dir=
=3D/etc/pki/vdsm/libvirt-spice,tls-channel=3Dmain,tls-channel=3Ddisplay,tls=
-channel=3Dinputs,tls-channel=3Dcursor,tls-channel=3Dplayback,tls-channel=
=3Drecord,tls-channel=3Dsmartcard,tls-channel=3Dusbredir,seamless-migration=
=3Don<br>-k en-us -vga qxl -global qxl-vga.ram_size=3D67108864<tel:67108=
864> -global<br>qxl-vga.vram_size=3D33554432<tel:33554432>
-incomi=
ng tcp:[::]:49152 -device<br>virtio-balloon-pci,id=3Dballoon0,bus=3Dpci.0,a=
ddr=3D0x7<br>[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-h=
a/agent.log<br>MainThread::INFO::2014-12-28<br>20:49:27,315::state_decorato=
rs::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)<b=
r>Local maintenance
detected<br>MainThread::INFO::2014-12-28<br>20:49:27,64=
6::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEng=
ine::(start_monitoring)<br>Current state LocalMaintenance (score: 0)<br>Mai=
nThread::INFO::2014-12-28<br>20:49:27,646::hosted_engine::332::ovirt_hosted=
_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Best rem=
ote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-12-28<br>=
20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_en=
gine.HostedEngine::(check)<br>Local maintenance detected<br>MainThread::INF=
O::2014-12-28<br>20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>Current state LocalM=
aintenance (score: 0)<br>MainThread::INFO::2014-12-28<br>20:49:37,961::host=
ed_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(s=
tart_monitoring)<br>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>Main=
Thread::INFO::2014-12-28<br>20:49:48,048::state_decorators::124::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>Local maintenance=
detected<br>MainThread::INFO::2014-12-28<br>20:49:48,319::states::208::ovi=
rt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)<br>Score is 0=
due to local maintenance mode<br>MainThread::INFO::2014-12-28<br>20:49:48,=
319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE=
ngine::(start_monitoring)<br>Current state LocalMaintenance (score: 0)<br>M=
ainThread::INFO::2014-12-28<br>20:49:48,319::hosted_engine::332::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Best r=
emote host 10.0.0.94 (id: 1, score:
2400)<br><div><br></div>Thanks,<br>Cong=
<br><div><br></div><br>On 2014/12/28, at 3:46, "Artyom
Lukianov" <alukia=
no@redhat.com<mailto:alukiano@redhat.com>>
wrote:<br><div><br></di=
v>I see that you set local maintenance on host3 that do not have engine vm =
on it, so it nothing to migrate from this host.<br>If you set local mainten=
ance on host1, vm must migrate to another host with positive score.<br>Than=
ks<br><div><br></div>----- Original Message -----<br>From:
"Cong Yue" <C=
ong_Yue@alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>><br>T=
o: "Simone Tiraboschi"
<stirabos@redhat.com<mailto:stirabos@redhat.co=
m>><br>Cc:
users@ovirt.org<mailto:users@ovirt.org><br>Sent: Sat=
urday, December 27, 2014 6:58:32 PM<br>Subject: Re: [ovirt-users] VM failov=
er with
ovirt3.5<br><div><br></div>Hi<br><div><br></div>I
had a try with "h=
osted-engine --set-maintence --mode=3Dlocal" on<br>compute2-1, which is hos=
t 3 in my cluster. From the log, it shows<br>maintence mode is dectected, b=
ut migration does not happen.<br><div><br></div>The logs are as
follows. Is=
there any other config I need to
check?<br><div><br></div>[root@compute2-1=
vdsm]# hosted-engine
--vm-status<br><div><br></div><br>--=3D=3D Host 1 sta=
tus =3D=3D-<br><div><br></div>Status up-to-date
=
: True<br>Hostname
=
: 10.=
0.0.94<br>Host ID
&=
nbsp; : 1<br>Engine status
=
: {"health": =
"good", "vm": "up",<br>"detail":
"up"}<br>Score =
&nbs=
p;: 2400<br>Local maintenance
&nb=
sp; : False<br>Host timestamp
&nbs=
p; : 836296<br>Extra metadata
(valid at =
timestamp):<br>metadata_parse_version=3D1<br>metadata_feature_version=3D1<b=
r>timestamp=3D836296 (Sat Dec 27 11:42:39
2014)<br>host-id=3D1<br>score=3D2=
400<br>maintenance=3DFalse<br>state=3DEngineUp<br><div><br></div><br>--=3D=
=3D Host 2 status =3D=3D--<br><div><br></div>Status up-to-date
 =
; :
True<br>Hostname =
&nbs=
p; : 10.0.0.93<br>Host ID
=
:
2<br>Engine status=
&nbs=
p;: {"reason": "vm not running on<br>this host",
"health": "bad", "vm": "do=
wn", "detail": "unknown"}<br>Score
 =
;
: 2400<br>L=
ocal maintenance
&n=
bsp;: False<br>Host timestamp
&nb=
sp; : 687358<br>Extra metadata (valid at
timestamp):<b=
r>metadata_parse_version=3D1<br>metadata_feature_version=3D1<br>timestamp=
=3D687358 (Sat Dec 27 08:42:04
2014)<br>host-id=3D2<br>score=3D2400<br>main=
tenance=3DFalse<br>state=3DEngineDown<br><div><br></div><br>--=3D=3D
Host 3=
status =3D=3D--<br><div><br></div>Status up-to-date
&=
nbsp; : True<br>Hostname
&n=
bsp;
=
: 10.0.0.92<br>Host ID
&nb=
sp; : 3<br>Engine
status &n=
bsp;
: {"reas=
on": "vm not running on<br>this host", "health":
"bad", "vm": "down", "deta=
il": "unknown"}<br>Score
&=
nbsp; :
0<br>Local maintena=
nce
: True<br=
Host timestamp
&nb=
sp; : 681827<br>Extra
metadata (valid at timestamp):<br>metadata_par=
se_version=3D1<br>metadata_feature_version=3D1<br>timestamp=3D681827 (Sat D=
ec 27 08:42:40
2014)<br>host-id=3D3<br>score=3D0<br>maintenance=3DTrue<br>s=
tate=3DLocalMaintenance<br>[root@compute2-1 vdsm]# tail -f /var/log/ovirt-h=
osted-engine-ha/agent.log<br>MainThread::INFO::2014-12-27<br>08:42:41,109::=
hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine=
::(start_monitoring)<br>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>=
MainThread::INFO::2014-12-27<br>08:42:51,198::state_decorators::124::ovirt_=
hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>Local mainten=
ance detected<br>MainThread::INFO::2014-12-27<br>08:42:51,420::hosted_engin=
e::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_mon=
itoring)<br>Current state LocalMaintenance (score: 0)<br>MainThread::INFO::=
2014-12-27<br>08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agen=
t.hosted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0=
.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-12-27<br>08:43:01,507::s=
tate_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngi=
ne::(check)<br>Local maintenance
detected<br>MainThread::INFO::2014-12-27<b=
r>08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_eng=
ine.HostedEngine::(start_monitoring)<br>Current state LocalMaintenance (sco=
re: 0)<br>MainThread::INFO::2014-12-27<br>08:43:01,773::hosted_engine::332:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2=
014-12-27<br>08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.ag=
ent.hosted_engine.HostedEngine::(check)<br>Local maintenance detected<br>Ma=
inThread::INFO::2014-12-27<br>08:43:12,072::hosted_engine::327::ovirt_hoste=
d_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Current=
state LocalMaintenance (score: 0)<br>MainThread::INFO::2014-12-27<br>08:43=
:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hos=
tedEngine::(start_monitoring)<br>Best remote host 10.0.0.94 (id: 1, score: =
2400)<br><div><br></div><br><div><br></div>[root@compute2-3
~]# tail -f /va=
r/log/ovirt-hosted-engine-ha/agent.log<br>MainThread::INFO::2014-12-27<br>1=
1:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine=
.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (id: 2, sco=
re: 2400)<br>MainThread::INFO::2014-12-27<br>11:36:39,130::hosted_engine::3=
27::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitor=
ing)<br>Current state EngineUp (score: 2400)<br>MainThread::INFO::2014-12-2=
7<br>11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_=
engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (id: =
2, score: 2400)<br>MainThread::INFO::2014-12-27<br>11:36:49,449::hosted_eng=
ine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_m=
onitoring)<br>Current state EngineUp (score: 2400)<br>MainThread::INFO::201=
4-12-27<br>11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.h=
osted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93=
(id: 2, score: 2400)<br>MainThread::INFO::2014-12-27<br>11:36:59,739::host=
ed_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(s=
tart_monitoring)<br>Current state EngineUp (score: 2400)<br>MainThread::INF=
O::2014-12-27<br>11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.=
0.0.93 (id: 2, score: 2400)<br>MainThread::INFO::2014-12-27<br>11:37:09,779=
::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(co=
nsume)<br>Engine vm running on
localhost<br>MainThread::INFO::2014-12-27<br=
11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engi=
ne.HostedEngine::(start_monitoring)<br>Current state EngineUp (score: 2400)=
<br>MainThread::INFO::2014-12-27<br>11:37:10,026::hosted_engine::332::ovirt=
_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>B=
est remote host 10.0.0.93 (id: 2, score: 2400)<br>MainThread::INFO::2014-12=
-27<br>11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hoste=
d_engine.HostedEngine::(start_monitoring)<br>Current state EngineUp (score:=
2400)<br>MainThread::INFO::2014-12-27<br>11:37:20,331::hosted_engine::332:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>Best remote host 10.0.0.93 (id: 2, score:
2400)<br><div><br></div><br>=
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log<br>M=
ainThread::INFO::2014-12-27<br>08:36:12,462::hosted_engine::332::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Best r=
emote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-12-27<b=
r>08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_eng=
ine.HostedEngine::(start_monitoring)<br>Current state EngineDown (score: 24=
00)<br>MainThread::INFO::2014-12-27<br>08:36:22,798::hosted_engine::332::ov=
irt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<b=
r>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2014=
-12-27<br>08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_en=
gine.HostedEngine::(consume)<br>Engine vm is running on host 10.0.0.94 (id =
1)<br>MainThread::INFO::2014-12-27<br>08:36:33,169::hosted_engine::327::ovi=
rt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br=
Current state EngineDown (score:
2400)<br>MainThread::INFO::2014-12-27<br>=
08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.94 (id 1): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987 (Sat Dec 27 11:37:30 2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n', 'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status': {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400, 'maintenance': False, 'host-ts': 835987}
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.92 (id 3): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528 (Sat Dec 27 08:37:41 2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n', 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True, 'host-ts': 681528}
MainThread::INFO::2014-12-27 08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 2): {'engine-health': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215, 'gateway': True}
MainThread::INFO::2014-12-27 08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27 08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)

Thanks,
Cong

On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos@redhat.com> wrote:

----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: "Simone Tiraboschi" <stirabos@redhat.com>
Cc: users@ovirt.org
Sent: Friday, December 19, 2014 7:22:10 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5

Thanks for
the information. This is the log for my three ovirt nodes.
From the output of hosted-engine --vm-status, it shows the engine state for my 2nd and 3rd ovirt nodes is DOWN.
Is this the reason why VM failover does not work in my environment?

No, they look OK: you can run the engine VM on a single host at a time.

How can I also make the engine work for my 2nd and 3rd ovirt nodes?

If you put host 1 in local maintenance mode ( hosted-engine --set-maintenance --mode=local ) the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put host 2 in local maintenance mode, the VM should migrate again.

Can you please try that and post the logs if something goes bad?

--
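The round-trip Simone describes can be run as a quick scripted check. A minimal sketch, assuming a working hosted-engine deployment; the sleep interval is an arbitrary guess at migration time, and each command must be run on the host named in the comment:

```shell
# On host 1: drain it so the engine VM migrates to host 2.
hosted-engine --set-maintenance --mode=local

# Give the HA agents time to renegotiate and restart the VM elsewhere,
# then confirm which host the engine VM ended up on.
sleep 180
hosted-engine --vm-status

# On host 1: bring it back as a failover candidate.
hosted-engine --set-maintenance --mode=none

# On host 2: drain it in turn; the VM should migrate back.
hosted-engine --set-maintenance --mode=local
hosted-engine --vm-status
```

If the VM does not move, the agent.log excerpts below are exactly the place to look.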
--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.94
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 150475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=150475 (Fri Dec 19 13:12:18 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp


--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.93
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 1572
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1572 (Fri Dec 19 10:12:18 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown


--== Host 3 status ==--

Status up-to-date                  : False
Hostname                           : 10.0.0.92
Host ID                            : 3
Engine status                      : unknown stale-data
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 987
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=987 (Fri Dec 19 10:09:58 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown

--
And the /var/log/ovirt-hosted-engine-ha/agent.log for the three ovirt nodes is as follows:
--
10.0.0.94 (hosted-engine-1)
---
MainThread::INFO::2014-12-19 13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
MainThread::INFO::2014-12-19 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.93 (id 2): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448 (Fri Dec 19 10:10:14 2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 'host-ts': 1448}
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.92 (id 3): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987 (Fri Dec 19 10:09:58 2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 'host-ts': 987}
MainThread::INFO::2014-12-19 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance': False, 'cpu-load': 0.0269, 'gateway': True}
MainThread::INFO::2014-12-19 13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
MainThread::INFO::2014-12-19 13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
----

10.0.0.93 (hosted-engine-2)
MainThread::INFO::2014-12-19 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)

10.0.0.92 (hosted-engine-3)
same as 10.0.0.93
--

-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:28 AM
To: Yue, Cong
Cc: users@ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5

----- Original Message -----
From: "Cong Yue" <Cong_Yue@alliedtelesis.com>
To: users@ovirt.org
Sent: Friday, December 19, 2014 2:14:33 AM
Subject: [ovirt-users] VM failover with ovirt3.5

Hi,

In my environment, I have 3 ovirt nodes as one cluster. And on top of host-1, there is one VM to host the ovirt engine.

Also I have one external storage for the cluster to use as the data domain of engine and data.

I confirmed live migration works well in my environment.

But it seems very buggy for VM failover if I try to force one ovirt node to shut down. Sometimes the VM on the node which is shut down can migrate to another host, but it takes more than several minutes.

Sometimes, it cannot migrate at all. Sometimes, the VM only begins to move once the host is back.

Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/ ?

Is there some documentation explaining how VM failover works? And are there any bugs reported related to this?

http://www.ovirt.org/Feat...ted_Engine#Agent_State_Diagram

Thanks in advance,
Cong


This e-mail message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message. If you are the intended recipient, please be advised that the content of this message is subject to access, review and disclosure by the sender's e-mail System Administrator.

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
End of Users Digest, Vol 39, Issue 163
**************************************
------=_Part_1871238_1615445632.1419874799888--