[ovirt-users] Re: bond mode balance-alb

Nikolai Sednev nsednev at redhat.com
Mon Dec 29 17:40:00 UTC 2014


I'd like to add that mode 5 ("balance-tlb", which uses a floating MAC) and mode 6 ("balance-alb", which adds ARP negotiation) both hurt latency and performance; these modes should be avoided. 
Mode 0 ("balance-rr") should also be avoided: it is the only mode that allows a single TCP/IP stream to use more than one interface, so frames of the same flow are sent and received on different interfaces, adding jitter, latency and performance penalties, whereas balancing per flow is preferred. Unless your data center carries L2-only traffic, I really see no use for mode 0. 
Cisco routers have a comparable mechanism called IP CEF, which is on by default and balances traffic per TCP/IP flow instead of per packet, giving better per-flow load-balancing decisions. If it is turned off, per-packet load balancing is enforced, with a heavy impact on the router's CPU and memory, since a forwarding decision has to be made for every packet; the higher the bit rate, the harder the impact on the router's resources, especially with small packets. 
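The per-flow recommendation above can be sketched as a bonding configuration. This is a minimal, hypothetical example (interface names bond0, eth0 and eth1 are placeholders; requires root on a test host), creating an 802.3ad (mode 4) bond that hashes on layer 3+4 so each TCP/IP flow sticks to one slave:

```shell
# Hypothetical sketch: per-flow balancing with an 802.3ad (mode 4) bond.
# bond0/eth0/eth1 are placeholder names; needs root and the bonding module.
modprobe bonding
echo +bond0 > /sys/class/net/bonding_masters
echo 802.3ad  > /sys/class/net/bond0/bonding/mode
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy  # hash per flow
echo 100      > /sys/class/net/bond0/bonding/miimon            # link monitoring
ip link set eth0 down && echo +eth0 > /sys/class/net/bond0/bonding/slaves
ip link set eth1 down && echo +eth1 > /sys/class/net/bond0/bonding/slaves
ip link set bond0 up
cat /proc/net/bonding/bond0    # verify mode and slave state
```

With layer3+4 hashing a single stream stays on one link (no reordering), while distinct flows spread across the slaves.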


Thanks in advance. 

Best regards, 
Nikolai 
____________________ 
Nikolai Sednev 
Senior Quality Engineer at Compute team 
Red Hat Israel 
34 Jerusalem Road, 
Ra'anana, Israel 43501 

Tel: +972 9 7692043 
Mobile: +972 52 7342734 
Email: nsednev at redhat.com 
IRC: nsednev 

----- Original Message -----

From: users-request at ovirt.org 
To: users at ovirt.org 
Sent: Monday, December 29, 2014 6:53:59 AM 
Subject: Users Digest, Vol 39, Issue 163 

Send Users mailing list submissions to 
users at ovirt.org 

To subscribe or unsubscribe via the World Wide Web, visit 
http://lists.ovirt.org/mailman/listinfo/users 
or, via email, send a message with subject or body 'help' to 
users-request at ovirt.org 

You can reach the person managing the list at 
users-owner at ovirt.org 

When replying, please edit your Subject line so it is more specific 
than "Re: Contents of Users digest..." 


Today's Topics: 

1. Re: Problem after update ovirt to 3.5 (Juan Jose) 
2. Re: bond mode balance-alb (Dan Kenigsberg) 
3. Re: VM failover with ovirt3.5 (Yue, Cong) 


---------------------------------------------------------------------- 

Message: 1 
Date: Sun, 28 Dec 2014 20:08:37 +0100 
From: Juan Jose <jj197005 at gmail.com> 
To: Simone Tiraboschi <stirabos at redhat.com> 
Cc: "users at ovirt.org" <users at ovirt.org> 
Subject: Re: [ovirt-users] Problem after update ovirt to 3.5 
Message-ID: 
<CADrE9wYtNdMPNsyjjZxA3zbyKZhYB5DA03wQ17dTLfuBBtA-Bg at mail.gmail.com> 
Content-Type: text/plain; charset="utf-8" 

Many thanks Simone, 

Juanjo. 

On Tue, Dec 16, 2014 at 1:48 PM, Simone Tiraboschi <stirabos at redhat.com> 
wrote: 

> 
> 
> ----- Original Message ----- 
> > From: "Juan Jose" <jj197005 at gmail.com> 
> > To: "Yedidyah Bar David" <didi at redhat.com>, sbonazzo at redhat.com 
> > Cc: users at ovirt.org 
> > Sent: Tuesday, December 16, 2014 1:03:17 PM 
> > Subject: Re: [ovirt-users] Problem after update ovirt to 3.5 
> > 
> > Hello everybody, 
> > 
> > It was the firewall; after upgrading my engine the NFS configuration had 
> > disappeared. I configured it again as Red Hat says and now it works 
> > properly again. 
> > 
> > Many thanks again for the indications. 
> 
> We already had a patch for it [1], 
> it will be released next month with oVirt 3.5.1 
> 
> [1] http://gerrit.ovirt.org/#/c/32874/ 
> 
> > Juanjo. 
> > 
> > On Mon, Dec 15, 2014 at 2:32 PM, Yedidyah Bar David < didi at redhat.com > 
> > wrote: 
> > 
> > 
> > ----- Original Message ----- 
> > > From: "Juan Jose" < jj197005 at gmail.com > 
> > > To: users at ovirt.org 
> > > Sent: Monday, December 15, 2014 3:17:15 PM 
> > > Subject: [ovirt-users] Problem after update ovirt to 3.5 
> > > 
> > > Hello everybody, 
> > > 
> > > After upgrading my engine to oVirt 3.5, I have also upgraded one of my 
> > > hosts to 
> > > oVirt 3.5. After that, everything apparently went well. 
> > > 
> > > But after a few seconds my ISO domain gets disconnected and it is 
> > > impossible to Activate it. I'm attaching my engine.log. The error below 
> > > is shown each time I try to Activate the ISO domain. Before the upgrade 
> > > it was working without problems: 
> > > 
> > > 2014-12-15 13:25:07,607 ERROR 
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, 
> Call 
> > > Stack: null, Custom Event ID: -1, Message: Failed to connect Host 
> host1 to 
> > > the Storage Domains ISO_DOMAIN. 
> > > 2014-12-15 13:25:07,608 INFO 
> > > 
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand] 
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] FINISH, 
> > > ConnectStorageServerVDSCommand, return: 
> > > {81c0a853-715c-4478-a812-6a74808fc482=477}, log id: 3590969e 
> > > 2014-12-15 13:25:07,615 ERROR 
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, 
> Call 
> > > Stack: null, Custom Event ID: -1, Message: The error message for 
> connection 
> > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 returned by 
> > > VDSM 
> > > was: Problem while trying to mount target 
> > > 2014-12-15 13:25:07,616 ERROR 
> > > [org.ovirt.engine.core.bll.storage.NFSStorageHelper] 
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with 
> details 
> > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 failed 
> because 
> > > of error code 477 and error message is: problem while trying to mount 
> > > target 
> > > 
> > > If any other information is required, please tell me. 
> > 
> > Is the ISO domain on the engine host? 
> > 
> > Please check there iptables and /etc/exports, /etc/exports.d. 
> > 
> > Please post the setup (upgrade) log, check /var/log/ovirt-engine/setup. 
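To make that checklist concrete, here is a hedged sketch of commands one might run on the engine host (paths and port numbers are the NFS defaults, not values taken from this thread):

```shell
# Sketch: verify the ISO-domain NFS export and firewall on the engine host.
exportfs -v                              # is the iso directory actually exported?
cat /etc/exports /etc/exports.d/*.exports 2>/dev/null
showmount -e localhost                   # what an NFS client would see
iptables -L -n | grep -E '111|2049'      # rpcbind (111) and nfs (2049) open?
```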
> > 
> > Thanks, 
> > -- 
> > Didi 
> > 
> > _______________________________________________ 
> > Users mailing list 
> > Users at ovirt.org 
> > http://lists.ovirt.org/mailman/listinfo/users 
> > 
> 

------------------------------ 

Message: 2 
Date: Sun, 28 Dec 2014 23:56:58 +0000 
From: Dan Kenigsberg <danken at redhat.com> 
To: Blaster <Blaster at 556nato.com> 
Cc: "Users at ovirt.org List" <users at ovirt.org> 
Subject: Re: [ovirt-users] bond mode balance-alb 
Message-ID: <20141228235658.GE21690 at redhat.com> 
Content-Type: text/plain; charset=us-ascii 

On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote: 
> On 12/23/2014 2:55 AM, Dan Kenigsberg wrote: 
> >Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks 
> >https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0 
> 
> Dan, 
> 
> What is bad about these modes that oVirt can't use them? 

I can only quote jpirko's words from the link above: 

Do not use tlb or alb in bridge, never! It does not work, that's it. The reason 
is it mangles source macs in xmit frames and arps. When it is possible, just 
use mode 4 (lacp). That should be always possible because all enterprise 
switches support that. Generally, for 99% of use cases, you *should* use mode 
4. There is no reason to use other modes. 

> 
> I just tested mode 4, and the LACP with Fedora 20 appears to not be 
> compatible with the LAG mode on my Dell 2824. 
> 
> Would there be any issues with bringing two NICS into the VM and doing 
> balance-alb at the guest level? 
> 
> 
> 

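When mode 4 fails to come up against a switch LAG (as with the Dell 2824 above), the bond's LACP state can be inspected on the host. A hedged diagnostic sketch, assuming the bond is named bond0:

```shell
# Sketch: check whether the switch is actually answering LACPDUs.
# An all-zero partner MAC usually means the switch-side LAG is configured
# static rather than LACP (802.3ad), so negotiation never completes.
cat /proc/net/bonding/bond0
grep -i "partner mac address" /proc/net/bonding/bond0
```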

------------------------------ 

Message: 3 
Date: Sun, 28 Dec 2014 20:53:44 -0800 
From: "Yue, Cong" <Cong_Yue at alliedtelesis.com> 
To: Artyom Lukianov <alukiano at redhat.com> 
Cc: "users at ovirt.org" <users at ovirt.org> 
Subject: Re: [ovirt-users] VM failover with ovirt3.5 
Message-ID: <B7E7D6D4-B85D-471C-87A7-EA9AD32BF279 at alliedtelesis.com> 
Content-Type: text/plain; charset="utf-8" 

I checked again and confirmed that one guest VM is running on top of this host. The log is as follows: 

[root at compute2-1 vdsm]# ps -ef | grep qemu 
qemu 2983 846 0 Dec19 ? 00:00:00 [supervdsmServer] <defunct> 
root 5489 3053 0 20:49 pts/0 00:00:00 grep --color=auto qemu 
qemu 26128 1 0 Dec19 ? 01:09:19 /usr/libexec/qemu-kvm 
-name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m 
500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1 
-uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios 
type=1,manufacturer=oVirt,product=oVirt 
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33 
-no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc 
base=2014-12-19T20:18:01,driftfix=slew -no-kvm-pit-reinjection 
-no-hpet -no-shutdown -boot strict=on -device 
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device 
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= 
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 
-drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads 
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 
-netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3 
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait 
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm 
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait 
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 
-chardev spicevmc,id=charchannel2,name=vdagent -device 
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 
-spice tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on 
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global 
qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 
[root at compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log 
MainThread::INFO::2014-12-28 
20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) 
Local maintenance detected 
MainThread::INFO::2014-12-28 
20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state LocalMaintenance (score: 0) 
MainThread::INFO::2014-12-28 
20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-28 
20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) 
Local maintenance detected 
MainThread::INFO::2014-12-28 
20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state LocalMaintenance (score: 0) 
MainThread::INFO::2014-12-28 
20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-28 
20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) 
Local maintenance detected 
MainThread::INFO::2014-12-28 
20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) 
Score is 0 due to local maintenance mode 
MainThread::INFO::2014-12-28 
20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state LocalMaintenance (score: 0) 
MainThread::INFO::2014-12-28 
20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 

Thanks, 
Cong 


On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano at redhat.com> wrote: 

I see that you set local maintenance on host3, which does not have the engine VM on it, so there is nothing to migrate from that host. 
If you set local maintenance on host1, the VM should migrate to another host with a positive score. 
Thanks 

----- Original Message ----- 
From: "Cong Yue" <Cong_Yue at alliedtelesis.com> 
To: "Simone Tiraboschi" <stirabos at redhat.com> 
Cc: users at ovirt.org 
Sent: Saturday, December 27, 2014 6:58:32 PM 
Subject: Re: [ovirt-users] VM failover with ovirt3.5 

Hi 

I tried "hosted-engine --set-maintenance --mode=local" on 
compute2-1, which is host 3 in my cluster. The log shows that 
maintenance mode is detected, but migration does not happen. 

The logs are as follows. Is there any other config I need to check? 

[root at compute2-1 vdsm]# hosted-engine --vm-status 


--== Host 1 status ==-- 

Status up-to-date : True 
Hostname : 10.0.0.94 
Host ID : 1 
Engine status : {"health": "good", "vm": "up", 
"detail": "up"} 
Score : 2400 
Local maintenance : False 
Host timestamp : 836296 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=836296 (Sat Dec 27 11:42:39 2014) 
host-id=1 
score=2400 
maintenance=False 
state=EngineUp 


--== Host 2 status ==-- 

Status up-to-date : True 
Hostname : 10.0.0.93 
Host ID : 2 
Engine status : {"reason": "vm not running on 
this host", "health": "bad", "vm": "down", "detail": "unknown"} 
Score : 2400 
Local maintenance : False 
Host timestamp : 687358 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=687358 (Sat Dec 27 08:42:04 2014) 
host-id=2 
score=2400 
maintenance=False 
state=EngineDown 


--== Host 3 status ==-- 

Status up-to-date : True 
Hostname : 10.0.0.92 
Host ID : 3 
Engine status : {"reason": "vm not running on 
this host", "health": "bad", "vm": "down", "detail": "unknown"} 
Score : 0 
Local maintenance : True 
Host timestamp : 681827 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=681827 (Sat Dec 27 08:42:40 2014) 
host-id=3 
score=0 
maintenance=True 
state=LocalMaintenance 
[root at compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log 
MainThread::INFO::2014-12-27 
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-27 
08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) 
Local maintenance detected 
MainThread::INFO::2014-12-27 
08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state LocalMaintenance (score: 0) 
MainThread::INFO::2014-12-27 
08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-27 
08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) 
Local maintenance detected 
MainThread::INFO::2014-12-27 
08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state LocalMaintenance (score: 0) 
MainThread::INFO::2014-12-27 
08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-27 
08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check) 
Local maintenance detected 
MainThread::INFO::2014-12-27 
08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state LocalMaintenance (score: 0) 
MainThread::INFO::2014-12-27 
08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 



[root at compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log 
MainThread::INFO::2014-12-27 
11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-27 
11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-27 
11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-27 
11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-27 
11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-27 
11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-27 
11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-27 
11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) 
Engine vm running on localhost 
MainThread::INFO::2014-12-27 
11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-27 
11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-27 
11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-27 
11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 


[root at compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log 
MainThread::INFO::2014-12-27 
08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-27 
08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-27 
08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-27 
08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) 
Engine vm is running on host 10.0.0.94 (id 1) 
MainThread::INFO::2014-12-27 
08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-27 
08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-27 
08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-27 
08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-27 
08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-27 
08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-27 
08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) 
Global metadata: {'maintenance': False} 
MainThread::INFO::2014-12-27 
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) 
Host 10.0.0.94 (id 1): {'extra': 
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987 
(Sat Dec 27 11:37:30 
2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n', 
'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status': 
{'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400, 
'maintenance': False, 'host-ts': 835987} 
MainThread::INFO::2014-12-27 
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) 
Host 10.0.0.92 (id 3): {'extra': 
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528 
(Sat Dec 27 08:37:41 
2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n', 
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status': 
{'reason': 'vm not running on this host', 'health': 'bad', 'vm': 
'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True, 
'host-ts': 681528} 
MainThread::INFO::2014-12-27 
08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) 
Local (id 2): {'engine-health': {'reason': 'vm not running on this 
host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge': 
True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215, 
'gateway': True} 
MainThread::INFO::2014-12-27 
08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-27 
08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 

Thanks, 
Cong 

On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos at redhat.com> wrote: 



----- Original Message ----- 
From: "Cong Yue" <Cong_Yue at alliedtelesis.com> 
To: "Simone Tiraboschi" <stirabos at redhat.com> 
Cc: users at ovirt.org 
Sent: Friday, December 19, 2014 7:22:10 PM 
Subject: RE: [ovirt-users] VM failover with ovirt3.5 

Thanks for the information. This is the log for my three ovirt nodes. 
From the output of hosted-engine --vm-status, the engine state for 
my 2nd and 3rd oVirt nodes is DOWN. 
Is this the reason why VM failover does not work in my environment? 

No, they look ok: you can run the engine VM on a single host at a time. 

How can I make 
the engine also work on my 2nd and 3rd oVirt nodes? 

If you put the host 1 in local maintenance mode ( hosted-engine --set-maintenance --mode=local ) the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put host 2 in local maintenance mode the VM should migrate again. 

Can you please try that and post the logs if something is going bad? 
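The suggested test can be written out as a short procedure (host roles as in this thread; a sketch of the steps, not an exact transcript):

```shell
# Sketch of the migration test from this thread.
hosted-engine --set-maintenance --mode=local   # run on host 1: drain engine VM
hosted-engine --vm-status                      # repeat until the VM is up on host 2
hosted-engine --set-maintenance --mode=none    # run on host 1: reactivate it
hosted-engine --set-maintenance --mode=local   # now on host 2: VM should move back
```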


-- 
--== Host 1 status ==-- 

Status up-to-date : True 
Hostname : 10.0.0.94 
Host ID : 1 
Engine status : {"health": "good", "vm": "up", 
"detail": "up"} 
Score : 2400 
Local maintenance : False 
Host timestamp : 150475 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=150475 (Fri Dec 19 13:12:18 2014) 
host-id=1 
score=2400 
maintenance=False 
state=EngineUp 


--== Host 2 status ==-- 

Status up-to-date : True 
Hostname : 10.0.0.93 
Host ID : 2 
Engine status : {"reason": "vm not running on 
this host", "health": "bad", "vm": "down", "detail": "unknown"} 
Score : 2400 
Local maintenance : False 
Host timestamp : 1572 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=1572 (Fri Dec 19 10:12:18 2014) 
host-id=2 
score=2400 
maintenance=False 
state=EngineDown 


--== Host 3 status ==-- 

Status up-to-date : False 
Hostname : 10.0.0.92 
Host ID : 3 
Engine status : unknown stale-data 
Score : 2400 
Local maintenance : False 
Host timestamp : 987 
Extra metadata (valid at timestamp): 
metadata_parse_version=1 
metadata_feature_version=1 
timestamp=987 (Fri Dec 19 10:09:58 2014) 
host-id=3 
score=2400 
maintenance=False 
state=EngineDown 

-- 
And the /var/log/ovirt-hosted-engine-ha/agent.log for three ovirt nodes are 
as follows: 
-- 
10.0.0.94(hosted-engine-1) 
--- 
MainThread::INFO::2014-12-19 
13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) 
Engine vm running on localhost 
MainThread::INFO::2014-12-19 
13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) 
Global metadata: {'maintenance': False} 
MainThread::INFO::2014-12-19 
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) 
Host 10.0.0.93 (id 2): {'extra': 
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448 
(Fri Dec 19 10:10:14 
2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 
'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status': 
{'reason': 'vm not running on this host', 'health': 'bad', 'vm': 
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 
'host-ts': 1448} 
MainThread::INFO::2014-12-19 
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) 
Host 10.0.0.92 (id 3): {'extra': 
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987 
(Fri Dec 19 10:09:58 
2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status': 
{'reason': 'vm not running on this host', 'health': 'bad', 'vm': 
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 
'host-ts': 987} 
MainThread::INFO::2014-12-19 
13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) 
Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up', 
'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance': 
False, 'cpu-load': 0.0269, 'gateway': True} 
MainThread::INFO::2014-12-19 
13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) 
Engine vm running on localhost 
MainThread::INFO::2014-12-19 
13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
MainThread::INFO::2014-12-19 
13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineUp (score: 2400) 
MainThread::INFO::2014-12-19 
13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.93 (id: 2, score: 2400) 
---- 

10.0.0.93 (hosted-engine-2) 
MainThread::INFO::2014-12-19 
10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-19 
10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-19 
10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-19 
10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-19 
10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-19 
10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-19 
10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-19 
10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-19 
10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-19 
10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 
MainThread::INFO::2014-12-19 
10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Current state EngineDown (score: 2400) 
MainThread::INFO::2014-12-19 
10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) 
Best remote host 10.0.0.94 (id: 1, score: 2400) 


10.0.0.92 (hosted-engine-3) 
same as 10.0.0.93 
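[Editor's note, not part of the original logs: long runs like the above are easier to scan if the repeated "Current state ..." reports are collapsed. A minimal Python sketch, assuming only the payload format visible in the excerpts ("Current state <State> (score: <N>)"); the helper names are illustrative, not part of ovirt-hosted-engine-ha.]

```python
import re

# Matches the payload lines shown in the agent log above, e.g.
#   "Current state EngineUp (score: 2400)"
STATE_RE = re.compile(r"Current state (\w+) \(score: (\d+)\)")

def summarize(lines):
    """Collapse repeated state reports into (state, score, count) runs."""
    runs = []
    for line in lines:
        m = STATE_RE.search(line)
        if not m:
            continue  # skip "Best remote host ..." and other lines
        state, score = m.group(1), int(m.group(2))
        if runs and runs[-1][0] == state and runs[-1][1] == score:
            runs[-1] = (state, score, runs[-1][2] + 1)
        else:
            runs.append((state, score, 1))
    return runs

sample = [
    "Current state EngineUp (score: 2400)",
    "Best remote host 10.0.0.93 (id: 2, score: 2400)",
    "Current state EngineUp (score: 2400)",
    "Current state EngineDown (score: 2400)",
]
print(summarize(sample))  # [('EngineUp', 2400, 2), ('EngineDown', 2400, 1)]
```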
-- 

-----Original Message----- 
From: Simone Tiraboschi [mailto:stirabos at redhat.com] 
Sent: Friday, December 19, 2014 12:28 AM 
To: Yue, Cong 
Cc: users at ovirt.org 
Subject: Re: [ovirt-users] VM failover with ovirt3.5 



----- Original Message ----- 
From: "Cong Yue" <Cong_Yue at alliedtelesis.com> 
To: users at ovirt.org 
Sent: Friday, December 19, 2014 2:14:33 AM 
Subject: [ovirt-users] VM failover with ovirt3.5 



Hi 



In my environment, I have 3 oVirt nodes in one cluster, and on top of 
host-1 there is one VM hosting the oVirt engine. 

I also have one external storage server that the cluster uses as the 
data domain for the engine and for data. 

I confirmed that live migration works well in my environment. 

But VM failover seems very buggy when I force one oVirt node to shut 
down. Sometimes the VM on the node that was shut down can migrate to 
another host, but it takes more than several minutes. 

Sometimes it cannot migrate at all, and sometimes the VM only starts 
to move once the host comes back. 

Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/ 
? 

Is there any documentation explaining how VM failover works? And are 
there any reported bugs related to this? 

http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram 
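[Editor's note: the decision visible in the logs above can be sketched roughly as follows. This is illustrative only, not the actual ovirt-hosted-engine-ha code: every host publishes a score, and when no host runs the engine, the best-scoring host should start it. The function name and the tie-breaking rule are assumptions.]

```python
# Illustrative sketch of score-based failover, mirroring the log excerpts
# above (hosts reporting "score: 2400" and tracking the best remote host).

def pick_starter(hosts):
    """hosts: dict host_id -> {"score": int, "engine_up": bool}.

    Return the id of the host that should start the engine, or None if
    the engine is already running somewhere (state EngineUp)."""
    if any(h["engine_up"] for h in hosts.values()):
        return None  # others stay in EngineDown and keep monitoring
    # Highest score wins; ties broken by lowest host id (an assumption).
    return min(hosts, key=lambda hid: (-hosts[hid]["score"], hid))

cluster = {
    1: {"score": 2400, "engine_up": False},
    2: {"score": 2400, "engine_up": False},
    3: {"score": 0, "engine_up": False},  # e.g. a host that just went down
}
print(pick_starter(cluster))  # 1
```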

Thanks in advance, 

Cong 




This e-mail message is for the sole use of the intended recipient(s) 
and may contain confidential and privileged information. Any 
unauthorized review, use, disclosure or distribution is prohibited. If 
you are not the intended recipient, please contact the sender by reply 
e-mail and destroy all copies of the original message. If you are the 
intended recipient, please be advised that the content of this message 
is subject to access, review and disclosure by the sender's e-mail System 
Administrator. 

_______________________________________________ 
Users mailing list 
Users at ovirt.org 
http://lists.ovirt.org/mailman/listinfo/users 




------------------------------ 



End of Users Digest, Vol 39, Issue 163 
************************************** 
