Re: [ovirt-users] VM failover with ovirt3.5
by Nikolai Sednev
Hi,
Your guest VM has to be defined as "Highly Available":
Highly Available
Select this check box if the virtual machine is to be highly available. For example, in cases of host maintenance or failure, the virtual machine is automatically moved to or re-launched on another host. If the host is manually shut down by the system administrator, the virtual machine is not automatically moved to another host.
Note that this option is unavailable if the Migration Options setting in the Hosts tab is set to either Allow manual migration only or No migration. For a virtual machine to be highly available, it must be possible for the Manager to migrate the virtual machine to other available hosts as necessary.
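Besides the check box in the webadmin "Edit Virtual Machine" dialog, the same flag can be set through the REST API. This is only a rough sketch: the engine URL, password and VM id are placeholders, and the exact element names should be verified against your engine's /api before use.

# mark an existing VM as highly available (illustrative sketch only)
curl -k -u admin@internal:PASSWORD \
     -X PUT -H "Content-Type: application/xml" \
     -d '<vm><high_availability><enabled>true</enabled></high_availability></vm>' \
     https://ENGINE_FQDN/api/vms/VM_ID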
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 29, 2014 7:50:07 PM
Subject: Users Digest, Vol 39, Issue 169
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: VM failover with ovirt3.5 (Yue, Cong)
----------------------------------------------------------------------
Message: 1
Date: Mon, 29 Dec 2014 09:49:58 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Artyom Lukianov <alukiano(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Message-ID: <11A51118-8B03-41FE-8FD0-C81AC8897EF6(a)alliedtelesis.com>
Content-Type: text/plain; charset="us-ascii"
Thanks for the detailed explanation. Do you mean only the HE VM can fail over? I want to try this with a VM on any host to check whether the VM can fail over to another host automatically, like in VMware or XenServer.
I will try as you advised and provide the logs for your further advice.
Thanks,
Cong
> On 2014/12/29, at 8:43, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> I see that the HE VM runs on the host with IP 10.0.0.94, and the two other hosts are in "Local Maintenance" state, so the VM will not migrate to either of them. Can you try disabling local maintenance on all hosts in the HE environment, then enable "local maintenance" on the host where the HE VM runs, and also provide the output of hosted-engine --vm-status?
> Failover works in the following way:
> 1) if the host running the HE vm has a score at least 800 lower than some other host in the HE environment, the HE vm will migrate to the host with the best score
> 2) if something happens to the vm (kernel panic, a service crash, ...), the agent will restart the HE vm on another host in the HE environment with a positive score
> 3) if the host running the HE vm is put into local maintenance, the vm will migrate to another host with a positive score
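> For example, to exercise case 3 the command sequence would look roughly like this (a sketch; run each command where the comment says):
>
> # first, on every HA host, clear any leftover maintenance state
> hosted-engine --set-maintenance --mode=none
> # then, only on the host that currently runs the HE vm
> hosted-engine --set-maintenance --mode=local
> # watch the result from any host
> hosted-engine --vm-status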
> Thanks.
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
> To: "Artyom Lukianov" <alukiano(a)redhat.com>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com>, users(a)ovirt.org
> Sent: Monday, December 29, 2014 6:30:42 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> Thanks and the --vm-status log is as follows:
> [root@compute2-2 ~]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 1008087
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1008087 (Mon Dec 29 11:25:51 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 859142
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=859142 (Mon Dec 29 08:25:08 2014)
> host-id=2
> score=0
> maintenance=True
> state=LocalMaintenance
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 853615
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=853615 (Mon Dec 29 08:25:57 2014)
> host-id=3
> score=0
> maintenance=True
> state=LocalMaintenance
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]#
>
> Could you please explain how VM failover works inside ovirt? Is there any other debug option I can enable to check the problem?
>
> Thanks,
> Cong
>
>
> On 2014/12/29, at 1:39, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> Can you also provide the output of hosted-engine --vm-status please? It was useful the previous time, because I do not see anything unusual.
> Thanks
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>>
> To: "Artyom Lukianov" <alukiano(a)redhat.com<mailto:alukiano@redhat.com>>
> Cc: "Simone Tiraboschi" <stirabos(a)redhat.com<mailto:stirabos@redhat.com>>, users(a)ovirt.org<mailto:users@ovirt.org>
> Sent: Monday, December 29, 2014 7:15:24 AM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> I also changed the maintenance mode to local on another host, but the VM on this host still cannot be migrated. The logs are as follows.
>
> [root@compute2-2 ~]# hosted-engine --set-maintenance --mode=local
> [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-28
> 21:09:04,184::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:14,603::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,903::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:24,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,026::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm is running on host 10.0.0.94 (id 1)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:35,236::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:45,604::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 21:09:55,691::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 21:09:55,701::brokerlink::111::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Trying: notify time=1419829795.7 type=state_transition
> detail=EngineDown-LocalMaintenance hostname='compute2-2'
> MainThread::INFO::2014-12-28
> 21:09:55,761::brokerlink::120::ovirt_hosted_engine_ha.lib.brokerlink.BrokerLink::(notify)
> Success, was notification of state_transition
> (EngineDown-LocalMaintenance) sent? sent
> MainThread::INFO::2014-12-28
> 21:09:55,990::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-28
> 21:09:55,990::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 21:09:55,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> ^C
> You have new mail in /var/spool/mail/root
> [root@compute2-2 ~]# ps -ef | grep qemu
> root 18420 2777 0 21:10 pts/0 00:00:00 grep --color=auto qemu
> qemu 29809 1 0 Dec19 ? 01:17:20 /usr/libexec/qemu-kvm
> -name testvm2-2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem
> -m 500 -realtime mlock=off -smp
> 1,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> c31e97d0-135e-42da-9954-162b5228dce3 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0059-3610-8033-B4C04F395931,uuid=c31e97d0-135e-42da-9954-162b5228dce3
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2-2.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2014-12-19T20:17:17,driftfix=slew -no-kvm-pit-reinjection
> -no-hpet -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/5cbeb8c9-4f04-48d0-a5eb-78c49187c550/a0570e8c-9867-4ec4-818f-11e102fc4f9b,if=none,id=drive-virtio-disk0,format=qcow2,serial=5cbeb8c9-4f04-48d0-a5eb-78c49187c550,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=29 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:00,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5901,addr=10.0.0.93,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
> qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
> [root@compute2-2 ~]#
>
> Thanks,
> Cong
>
>
> On 2014/12/28, at 20:53, "Yue, Cong" <Cong_Yue(a)alliedtelesis.com> wrote:
>
> I checked it again and confirmed there is one guest VM running on top of this host. The log is as follows:
>
> [root@compute2-1 vdsm]# ps -ef | grep qemu
> qemu 2983 846 0 Dec19 ? 00:00:00 [supervdsmServer] <defunct>
> root 5489 3053 0 20:49 pts/0 00:00:00 grep --color=auto qemu
> qemu 26128 1 0 Dec19 ? 01:09:19 /usr/libexec/qemu-kvm
> -name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
> 500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
> -uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2014-12-19T20:18:01,driftfix=slew -no-kvm-pit-reinjection
> -no-hpet -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
> -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -chardev spicevmc,id=charchannel2,name=vdagent -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
> -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
> qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
> [root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-28
> 20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-28
> 20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-28
> 20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
> Score is 0 due to local maintenance mode
> MainThread::INFO::2014-12-28
> 20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-28
> 20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
> Thanks,
> Cong
>
>
> On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
>
> I see that you set local maintenance on host3, which does not have the engine VM on it, so there is nothing to migrate from that host.
> If you set local maintenance on host1, the VM should migrate to another host with a positive score.
> Thanks
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>>
> To: "Simone Tiraboschi" <stirabos(a)redhat.com<mailto:stirabos@redhat.com><mailto:stirabos@redhat.com>>
> Cc: users(a)ovirt.org<mailto:users@ovirt.org><mailto:users@ovirt.org>
> Sent: Saturday, December 27, 2014 6:58:32 PM
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
> Hi
>
> I had a try with "hosted-engine --set-maintenance --mode=local" on
> compute2-1, which is host 3 in my cluster. The log shows that
> maintenance mode is detected, but migration does not happen.
>
> The logs are as follows. Is there any other config I need to check?
>
> [root@compute2-1 vdsm]# hosted-engine --vm-status
>
>
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 836296
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=836296 (Sat Dec 27 11:42:39 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 687358
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=687358 (Sat Dec 27 08:42:04 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 0
> Local maintenance : True
> Host timestamp : 681827
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=681827 (Sat Dec 27 08:42:40 2014)
> host-id=3
> score=0
> maintenance=True
> state=LocalMaintenance
> [root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
> Local maintenance detected
> MainThread::INFO::2014-12-27
> 08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state LocalMaintenance (score: 0)
> MainThread::INFO::2014-12-27
> 08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
>
>
> [root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-27
> 11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-27
> 11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
>
>
> [root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
> MainThread::INFO::2014-12-27
> 08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm is running on host 10.0.0.94 (id 1)
> MainThread::INFO::2014-12-27
> 08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Global metadata: {'maintenance': False}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.94 (id 1): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987
> (Sat Dec 27 11:37:30
> 2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
> 'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status':
> {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400,
> 'maintenance': False, 'host-ts': 835987}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.92 (id 3): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528
> (Sat Dec 27 08:37:41
> 2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n',
> 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True,
> 'host-ts': 681528}
> MainThread::INFO::2014-12-27
> 08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Local (id 2): {'engine-health': {'reason': 'vm not running on this
> host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':
> True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215,
> 'gateway': True}
> MainThread::INFO::2014-12-27
> 08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-27
> 08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
> Thanks,
> Cong
>
> On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos(a)redhat.com> wrote:
>
>
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>>
> To: "Simone Tiraboschi" <stirabos(a)redhat.com<mailto:stirabos@redhat.com><mailto:stirabos@redhat.com>>
> Cc: users(a)ovirt.org<mailto:users@ovirt.org><mailto:users@ovirt.org>
> Sent: Friday, December 19, 2014 7:22:10 PM
> Subject: RE: [ovirt-users] VM failover with ovirt3.5
>
> Thanks for the information. This is the log for my three ovirt nodes.
> The output of hosted-engine --vm-status shows that the engine state for
> my 2nd and 3rd oVirt nodes is DOWN.
> Is this the reason why VM failover does not work in my environment?
>
> No, they look OK: you can run the engine VM on a single host at a time.
>
> How can I make
> the engine also work on my 2nd and 3rd oVirt nodes?
>
> If you put host 1 in local maintenance mode ( hosted-engine --set-maintenance --mode=local ), the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put host 2 in local maintenance mode, the VM should migrate again.
>
> Can you please try that and post the logs if something goes wrong?
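> Concretely, the round trip would look roughly like this (a sketch; each command is run on the host named in the comment):
>
> # on host 1, which currently runs the engine VM
> hosted-engine --set-maintenance --mode=local
> hosted-engine --vm-status    # the engine VM should come up on host 2
> # once it is up on host 2, reactivate host 1
> hosted-engine --set-maintenance --mode=none
> # repeating the same two steps on host 2 should move the VM back to host 1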
>
>
> --
> --== Host 1 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.94
> Host ID : 1
> Engine status : {"health": "good", "vm": "up",
> "detail": "up"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 150475
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=150475 (Fri Dec 19 13:12:18 2014)
> host-id=1
> score=2400
> maintenance=False
> state=EngineUp
>
>
> --== Host 2 status ==--
>
> Status up-to-date : True
> Hostname : 10.0.0.93
> Host ID : 2
> Engine status : {"reason": "vm not running on
> this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 2400
> Local maintenance : False
> Host timestamp : 1572
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=1572 (Fri Dec 19 10:12:18 2014)
> host-id=2
> score=2400
> maintenance=False
> state=EngineDown
>
>
> --== Host 3 status ==--
>
> Status up-to-date : False
> Hostname : 10.0.0.92
> Host ID : 3
> Engine status : unknown stale-data
> Score : 2400
> Local maintenance : False
> Host timestamp : 987
> Extra metadata (valid at timestamp):
> metadata_parse_version=1
> metadata_feature_version=1
> timestamp=987 (Fri Dec 19 10:09:58 2014)
> host-id=3
> score=2400
> maintenance=False
> state=EngineDown
>
> --
> And the /var/log/ovirt-hosted-engine-ha/agent.log for three ovirt nodes are
> as follows:
> --
> 10.0.0.94(hosted-engine-1)
> ---
> MainThread::INFO::2014-12-19
> 13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-19
> 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Global metadata: {'maintenance': False}
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.93 (id 2): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448
> (Fri Dec 19 10:10:14
> 2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
> 'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
> 'host-ts': 1448}
> MainThread::INFO::2014-12-19
> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Host 10.0.0.92 (id 3): {'extra':
> 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987
> (Fri Dec 19 10:09:58
> 2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
> 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
> {'reason': 'vm not running on this host', 'health': 'bad', 'vm':
> 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
> 'host-ts': 987}
> MainThread::INFO::2014-12-19
> 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
> Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up',
> 'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance':
> False, 'cpu-load': 0.0269, 'gateway': True}
> MainThread::INFO::2014-12-19
> 13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
> Engine vm running on localhost
> MainThread::INFO::2014-12-19
> 13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineUp (score: 2400)
> MainThread::INFO::2014-12-19
> 13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.93 (id: 2, score: 2400)
> ----
>
> 10.0.0.93 (hosted-engine-2)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Current state EngineDown (score: 2400)
> MainThread::INFO::2014-12-19
> 10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
> Best remote host 10.0.0.94 (id: 1, score: 2400)
>
>
> 10.0.0.92(hosted-engine-3)
> same as 10.0.0.93
> --
>
> -----Original Message-----
> From: Simone Tiraboschi [mailto:stirabos@redhat.com]
> Sent: Friday, December 19, 2014 12:28 AM
> To: Yue, Cong
> Cc: users(a)ovirt.org
> Subject: Re: [ovirt-users] VM failover with ovirt3.5
>
>
>
> ----- Original Message -----
> From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>>
> To: users(a)ovirt.org<mailto:users@ovirt.org><mailto:users@ovirt.org>
> Sent: Friday, December 19, 2014 2:14:33 AM
> Subject: [ovirt-users] VM failover with ovirt3.5
>
>
>
> Hi
>
>
>
> In my environment, I have 3 oVirt nodes as one cluster, and on top of
> host-1 there is one VM hosting the oVirt engine.
>
> Also I have one external storage for the cluster to use as data domain
> of engine and data.
>
> I confirmed live migration works well in my environment.
>
> But VM failover seems very buggy if I forcibly shut down one oVirt
> node. Sometimes the VM on the node that was shut down can migrate to
> another host, but it takes more than several minutes.
>
> Sometimes it cannot migrate at all. Sometimes the VM only starts to
> move once the host is back.
>
> Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/ ?
>
> Is there some documentation explaining how VM failover works? And
> are there any reported bugs related to this?
>
> http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
>
> Thanks in advance,
>
> Cong
>
>
>
>
>
>
>
>
>
------------------------------
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
End of Users Digest, Vol 39, Issue 169
**************************************
a.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Best remote =
host 10.0.0.94 (id: 1, score: 2400)<br>> ^C<br>> You have new mail in=
/var/spool/mail/root<br>> [root@compute2-2 ~]# ps -ef | grep qemu<br>&g=
t; root 18420 2777 0 21:10<x-apple-data-detect=
ors://39> pts/0 00:00:00<x-apple-data-detectors://40>=
grep --color=3Dauto qemu<br>> qemu 29809 1 =
0 Dec19 ? 01:17:20 /usr/libexec/qemu-kvm<b=
r>> -name testvm2-2 -S -machine rhel6.5.0,accel=3Dkvm,usb=3Doff -cpu Neh=
alem<br>> -m 500 -realtime mlock=3Doff -smp<br>> 1,maxcpus=3D16,socke=
ts=3D16,cores=3D1,threads=3D1 -uuid<br>> c31e97d0-135e-42da-9954-162b522=
8dce3 -smbios<br>> type=3D1,manufacturer=3DoVirt,product=3DoVirt<br>>=
Node,version=3D7-0.1406.el7.centos.2.5,serial=3D4C4C4544-0059-3610-8033-B4=
C04F395931,uuid=3Dc31e97d0-135e-42da-9954-162b5228dce3<br>> -no-user-con=
fig -nodefaults -chardev<br>> socket,id=3Dcharmonitor,path=3D/var/lib/li=
bvirt/qemu/testvm2-2.monitor,server,nowait<br>> -mon chardev=3Dcharmonit=
or,id=3Dmonitor,mode=3Dcontrol -rtc<br>> base=3D2014-12-19T20:17:17<x=
-apple-data-detectors://42>,driftfix=3Dslew -no-kvm-pit-reinjection<br>&=
gt; -no-hpet -no-shutdown -boot strict=3Don -device<br>> piix3-usb-uhci,=
id=3Dusb,bus=3Dpci.0,addr=3D0x1.0x2 -device<br>> virtio-scsi-pci,id=3Dsc=
si0,bus=3Dpci.0,addr=3D0x4 -device<br>> virtio-serial-pci,id=3Dvirtio-se=
rial0,max_ports=3D16,bus=3Dpci.0,addr=3D0x5<br>> -drive if=3Dnone,id=3Dd=
rive-ide0-1-0,readonly=3Don,format=3Draw,serial=3D<br>> -device ide-cd,b=
us=3Dide.1,unit=3D0,drive=3Ddrive-ide0-1-0,id=3Dide0-1-0<br>> -drive fil=
e=3D/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-42=
56-b2ac-bd7265525c69/images/5cbeb8c9-4f04-48d0-a5eb-78c49187c550/a0570e8c-9=
867-4ec4-818f-11e102fc4f9b,if=3Dnone,id=3Ddrive-virtio-disk0,format=3Dqcow2=
,serial=3D5cbeb8c9-4f04-48d0-a5eb-78c49187c550,cache=3Dnone,werror=3Dstop,r=
error=3Dstop,aio=3Dthreads<br>> -device virtio-blk-pci,scsi=3Doff,bus=3D=
pci.0,addr=3D0x6,drive=3Ddrive-virtio-disk0,id=3Dvirtio-disk0,bootindex=3D1=
<br>> -netdev tap,fd=3D28,id=3Dhostnet0,vhost=3Don,vhostfd=3D29 -device<=
br>> virtio-net-pci,netdev=3Dhostnet0,id=3Dnet0,mac=3D00:1a:4a:db:94:00,=
bus=3Dpci.0,addr=3D0x3<br>> -chardev socket,id=3Dcharchannel0,path=3D/va=
r/lib/libvirt/qemu/channels/c31e97d0-135e-42da-9954-162b5228dce3.com.redhat=
.rhevm.vdsm,server,nowait<br>> -device virtserialport,bus=3Dvirtio-seria=
l0.0,nr=3D1,chardev=3Dcharchannel0,id=3Dchannel0,name=3Dcom.redhat.rhevm.vd=
sm<br>> -chardev socket,id=3Dcharchannel1,path=3D/var/lib/libvirt/qemu/c=
hannels/c31e97d0-135e-42da-9954-162b5228dce3.org.qemu.guest_agent.0,server,=
nowait<br>> -device virtserialport,bus=3Dvirtio-serial0.0,nr=3D2,chardev=
=3Dcharchannel1,id=3Dchannel1,name=3Dorg.qemu.guest_agent.0<br>> -charde=
v spicevmc,id=3Dcharchannel2,name=3Dvdagent -device<br>> virtserialport,=
bus=3Dvirtio-serial0.0,nr=3D3,chardev=3Dcharchannel2,id=3Dchannel2,name=3Dc=
om.redhat.spice.0<br>> -spice tls-port=3D5901,addr=3D10.0.0.93,x509-dir=
=3D/etc/pki/vdsm/libvirt-spice,tls-channel=3Dmain,tls-channel=3Ddisplay,tls=
-channel=3Dinputs,tls-channel=3Dcursor,tls-channel=3Dplayback,tls-channel=
=3Drecord,tls-channel=3Dsmartcard,tls-channel=3Dusbredir,seamless-migration=
=3Don<br>> -k en-us -vga qxl -global qxl-vga.ram_size=3D67108864<tel:=
67108864> -global<br>> qxl-vga.vram_size=3D33554432<tel:33554432&g=
t; -incoming tcp:[::]:49152 -device<br>> virtio-balloon-pci,id=3Dballoon=
0,bus=3Dpci.0,addr=3D0x7<br>> [root@compute2-2 ~]#<br>><br>> Thank=
s,<br>> Cong<br>><br>><br>> On 2014/12/28, at 20:53, "Yue, Cong=
" <Cong_Yue@alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>&l=
t;mailto:Cong_Yue@alliedtelesis.com>> wrote:<br>><br>> I checke=
d it again and confirmed there is one guest VM is running on the top of thi=
s host. The log is as follows:<br>><br>> [root@compute2-1 vdsm]# ps -=
ef | grep qemu<br>> qemu 2983 846 0 Dec=
19 ? 00:00:00<x-apple-data-detectors://0> =
[supervdsmServer] <defunct><br>> root 5489 &nb=
sp;3053 0 20:49<x-apple-data-detectors://1> pts/0 =
00:00:00<x-apple-data-detectors://2> grep --color=3Dauto qemu<br>>=
qemu 26128 1 0 Dec19 ? &nb=
sp; 01:09:19 /usr/libexec/qemu-kvm<br>> -name testvm2 -S -machine =
rhel6.5.0,accel=3Dkvm,usb=3Doff -cpu Nehalem -m<br>> 500 -realtime mlock=
=3Doff -smp 1,maxcpus=3D16,sockets=3D16,cores=3D1,threads=3D1<br>> -uuid=
e46bca87-4df5-4287-844b-90a26fccef33 -smbios<br>> type=3D1,manufacturer=
=3DoVirt,product=3DoVirt<br>> Node,version=3D7-0.1406.el7.centos.2.5,ser=
ial=3D4C4C4544-0030-3310-8059-B8C04F585231,uuid=3De46bca87-4df5-4287-844b-9=
0a26fccef33<br>> -no-user-config -nodefaults -chardev<br>> socket,id=
=3Dcharmonitor,path=3D/var/lib/libvirt/qemu/testvm2.monitor,server,nowait<b=
r>> -mon chardev=3Dcharmonitor,id=3Dmonitor,mode=3Dcontrol -rtc<br>> =
base=3D2014-12-19T20:18:01<x-apple-data-detectors://4>,driftfix=3Dsle=
w -no-kvm-pit-reinjection<br>> -no-hpet -no-shutdown -boot strict=3Don -=
device<br>> piix3-usb-uhci,id=3Dusb,bus=3Dpci.0,addr=3D0x1.0x2 -device<b=
r>> virtio-scsi-pci,id=3Dscsi0,bus=3Dpci.0,addr=3D0x4 -device<br>> vi=
rtio-serial-pci,id=3Dvirtio-serial0,max_ports=3D16,bus=3Dpci.0,addr=3D0x5<b=
r>> -drive if=3Dnone,id=3Ddrive-ide0-1-0,readonly=3Don,format=3Draw,seri=
al=3D<br>> -device ide-cd,bus=3Dide.1,unit=3D0,drive=3Ddrive-ide0-1-0,id=
=3Dide0-1-0<br>> -drive file=3D/rhev/data-center/00000002-0002-0002-0002=
-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41a=
f-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=3Dnone,id=3Ddri=
ve-virtio-disk0,format=3Dqcow2,serial=3Db4b5426b-95e3-41af-b286-da245891cda=
f,cache=3Dnone,werror=3Dstop,rerror=3Dstop,aio=3Dthreads<br>> -device vi=
rtio-blk-pci,scsi=3Doff,bus=3Dpci.0,addr=3D0x6,drive=3Ddrive-virtio-disk0,i=
d=3Dvirtio-disk0,bootindex=3D1<br>> -netdev tap,fd=3D26,id=3Dhostnet0,vh=
ost=3Don,vhostfd=3D27 -device<br>> virtio-net-pci,netdev=3Dhostnet0,id=
=3Dnet0,mac=3D00:1a:4a:db:94:01,bus=3Dpci.0,addr=3D0x3<br>> -chardev soc=
ket,id=3Dcharchannel0,path=3D/var/lib/libvirt/qemu/channels/e46bca87-4df5-4=
287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait<br>> -device v=
irtserialport,bus=3Dvirtio-serial0.0,nr=3D1,chardev=3Dcharchannel0,id=3Dcha=
nnel0,name=3Dcom.redhat.rhevm.vdsm<br>> -chardev socket,id=3Dcharchannel=
1,path=3D/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef3=
3.org.qemu.guest_agent.0,server,nowait<br>> -device virtserialport,bus=
=3Dvirtio-serial0.0,nr=3D2,chardev=3Dcharchannel1,id=3Dchannel1,name=3Dorg.=
qemu.guest_agent.0<br>> -chardev spicevmc,id=3Dcharchannel2,name=3Dvdage=
nt -device<br>> virtserialport,bus=3Dvirtio-serial0.0,nr=3D3,chardev=3Dc=
harchannel2,id=3Dchannel2,name=3Dcom.redhat.spice.0<br>> -spice tls-port=
=3D5900,addr=3D10.0.0.92,x509-dir=3D/etc/pki/vdsm/libvirt-spice,tls-channel=
=3Dmain,tls-channel=3Ddisplay,tls-channel=3Dinputs,tls-channel=3Dcursor,tls=
-channel=3Dplayback,tls-channel=3Drecord,tls-channel=3Dsmartcard,tls-channe=
l=3Dusbredir,seamless-migration=3Don<br>> -k en-us -vga qxl -global qxl-=
vga.ram_size=3D67108864<tel:67108864> -global<br>> qxl-vga.vram_si=
ze=3D33554432<tel:33554432> -incoming tcp:[::]:49152 -device<br>> =
virtio-balloon-pci,id=3Dballoon0,bus=3Dpci.0,addr=3D0x7<br>> [root@compu=
te2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log<br>> Main=
Thread::INFO::2014-12-28<br>> 20:49:27,315::state_decorators::124::ovirt=
_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>> Local m=
aintenance detected<br>> MainThread::INFO::2014-12-28<br>> 20:49:27,6=
46::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEn=
gine::(start_monitoring)<br>> Current state LocalMaintenance (score: 0)<=
br>> MainThread::INFO::2014-12-28<br>> 20:49:27,646::hosted_engine::3=
32::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitor=
ing)<br>> Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainTh=
read::INFO::2014-12-28<br>> 20:49:37,732::state_decorators::124::ovirt_h=
osted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>> Local mai=
ntenance detected<br>> MainThread::INFO::2014-12-28<br>> 20:49:37,961=
::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngi=
ne::(start_monitoring)<br>> Current state LocalMaintenance (score: 0)<br=
>> MainThread::INFO::2014-12-28<br>> 20:49:37,961::hosted_engine::332=
::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitorin=
g)<br>> Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThre=
ad::INFO::2014-12-28<br>> 20:49:48,048::state_decorators::124::ovirt_hos=
ted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>> Local maint=
enance detected<br>> MainThread::INFO::2014-12-28<br>> 20:49:48,319::=
states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(scor=
e)<br>> Score is 0 due to local maintenance mode<br>> MainThread::INF=
O::2014-12-28<br>> 20:49:48,319::hosted_engine::327::ovirt_hosted_engine=
_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current st=
ate LocalMaintenance (score: 0)<br>> MainThread::INFO::2014-12-28<br>>=
; 20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_eng=
ine.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id=
: 1, score: 2400)<br>><br>> Thanks,<br>> Cong<br>><br>><br>&=
gt; On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano(a)redhat.com<ma=
ilto:alukiano@redhat.com><mailto:alukiano@redhat.com>> wrote:<b=
r>><br>> I see that you set local maintenance on host3 that do not ha=
ve engine vm on it, so it nothing to migrate from this host.<br>> If you=
set local maintenance on host1, vm must migrate to another host with posit=
ive score.<br>> Thanks<br>><br>> ----- Original Message -----<br>&=
gt; From: "Cong Yue" <Cong_Yue@alliedtelesis.com<mailto:Cong_Yue@alli=
edtelesis.com><mailto:Cong_Yue@alliedtelesis.com>><br>> To: =
"Simone Tiraboschi" <stirabos@redhat.com<mailto:stirabos@redhat.com&g=
t;<mailto:stirabos@redhat.com>><br>> Cc: users(a)ovirt.org<mai=
lto:users@ovirt.org><mailto:users@ovirt.org><br>> Sent: Saturda=
y, December 27, 2014 6:58:32 PM<br>> Subject: Re: [ovirt-users] VM failo=
ver with ovirt3.5<br>><br>> Hi<br>><br>> I had a try with "host=
ed-engine --set-maintence --mode=3Dlocal" on<br>> compute2-1, which is h=
ost 3 in my cluster. From the log, it shows<br>> maintence mode is decte=
cted, but migration does not happen.<br>><br>> The logs are as follow=
s. Is there any other config I need to check?<br>><br>> [root@compute=
2-1 vdsm]# hosted-engine --vm-status<br>><br>><br>> --=3D=3D Host =
1 status =3D=3D-<br>><br>> Status up-to-date &nb=
sp; : True<br>> Hostname =
&nbs=
p; : 10.0.0.94<br>> Host ID &n=
bsp; : 1<br>> Engine sta=
tus &=
nbsp;: {"health": "good", "vm": "up",<br>> "detail": "up"}<br>> Score=
&nbs=
p; : 2400<br>> Local maintenance  =
; : False<br>> Host time=
stamp =
: 836296<br>> Extra metadata (valid at timestamp):<br>> metadata_par=
se_version=3D1<br>> metadata_feature_version=3D1<br>> timestamp=3D836=
296 (Sat Dec 27 11:42:39 2014)<br>> host-id=3D1<br>> score=3D2400<br>=
> maintenance=3DFalse<br>> state=3DEngineUp<br>><br>><br>> -=
-=3D=3D Host 2 status =3D=3D--<br>><br>> Status up-to-date &nb=
sp; : True<br>> Hostname=
&nbs=
p; : 10.0.0.93<br>> Host ID &n=
bsp; : 2<br>&=
gt; Engine status &=
nbsp; : {"reason": "vm not running on<br>> this host", "hea=
lth": "bad", "vm": "down", "detail": "unknown"}<br>> Score =
&nbs=
p; : 2400<br>> Local maintenance  =
; : False<br>> Host timestamp &=
nbsp; : 687358<br>&=
gt; Extra metadata (valid at timestamp):<br>> metadata_parse_version=3D1=
<br>> metadata_feature_version=3D1<br>> timestamp=3D687358 (Sat Dec 2=
7 08:42:04 2014)<br>> host-id=3D2<br>> score=3D2400<br>> maintenan=
ce=3DFalse<br>> state=3DEngineDown<br>><br>><br>> --=3D=3D Host=
3 status =3D=3D--<br>><br>> Status up-to-date &=
nbsp; : True<br>> Hostname &nbs=
p; &n=
bsp; : 10.0.0.92<br>> Host ID =
: 3<br>> Engine s=
tatus =
: {"reason": "vm not running on<br>> this host", "health": "bad",=
"vm": "down", "detail": "unknown"}<br>> Score &nbs=
p; &n=
bsp;: 0<br>> Local maintenance =
: True<br>> Host timestamp &nb=
sp; : 681827<br>> Extra metada=
ta (valid at timestamp):<br>> metadata_parse_version=3D1<br>> metadat=
a_feature_version=3D1<br>> timestamp=3D681827 (Sat Dec 27 08:42:40 2014)=
<br>> host-id=3D3<br>> score=3D0<br>> maintenance=3DTrue<br>> s=
tate=3DLocalMaintenance<br>> [root@compute2-1 vdsm]# tail -f /var/log/ov=
irt-hosted-engine-ha/agent.log<br>> MainThread::INFO::2014-12-27<br>>=
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engi=
ne.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id:=
1, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:42:51,198:=
:state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEn=
gine::(check)<br>> Local maintenance detected<br>> MainThread::INFO::=
2014-12-27<br>> 08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha=
.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state=
LocalMaintenance (score: 0)<br>> MainThread::INFO::2014-12-27<br>> 0=
8:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine=
.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1=
, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:43:01,507::s=
tate_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngi=
ne::(check)<br>> Local maintenance detected<br>> MainThread::INFO::20=
14-12-27<br>> 08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state L=
ocalMaintenance (score: 0)<br>> MainThread::INFO::2014-12-27<br>> 08:=
43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.H=
ostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1, =
score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:43:11,859::sta=
te_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine=
::(check)<br>> Local maintenance detected<br>> MainThread::INFO::2014=
-12-27<br>> 08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.age=
nt.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state Loc=
alMaintenance (score: 0)<br>> MainThread::INFO::2014-12-27<br>> 08:43=
:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hos=
tedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1, sc=
ore: 2400)<br>><br>><br>><br>> [root@compute2-3 ~]# tail -f /va=
r/log/ovirt-hosted-engine-ha/agent.log<br>> MainThread::INFO::2014-12-27=
<br>> 11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hos=
ted_engine.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0=
.93 (id: 2, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 11:36=
:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.Hos=
tedEngine::(start_monitoring)<br>> Current state EngineUp (score: 2400)<=
br>> MainThread::INFO::2014-12-27<br>> 11:36:39,130::hosted_engine::3=
32::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitor=
ing)<br>> Best remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainTh=
read::INFO::2014-12-27<br>> 11:36:49,449::hosted_engine::327::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> C=
urrent state EngineUp (score: 2400)<br>> MainThread::INFO::2014-12-27<br=
>> 11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted=
_engine.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.93=
(id: 2, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 11:36:59=
,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted=
Engine::(start_monitoring)<br>> Current state EngineUp (score: 2400)<br>=
> MainThread::INFO::2014-12-27<br>> 11:36:59,739::hosted_engine::332:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>> Best remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThrea=
d::INFO::2014-12-27<br>> 11:37:09,779::states::394::ovirt_hosted_engine_=
ha.agent.hosted_engine.HostedEngine::(consume)<br>> Engine vm running on=
localhost<br>> MainThread::INFO::2014-12-27<br>> 11:37:10,026::hoste=
d_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(st=
art_monitoring)<br>> Current state EngineUp (score: 2400)<br>> MainTh=
read::INFO::2014-12-27<br>> 11:37:10,026::hosted_engine::332::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> B=
est remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::20=
14-12-27<br>> 11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state E=
ngineUp (score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 11:37:20=
,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted=
Engine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, score=
: 2400)<br>><br>><br>> [root@compute2-2 ~]# tail -f /var/log/ovirt=
-hosted-engine-ha/agent.log<br>> MainThread::INFO::2014-12-27<br>> 08=
:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.=
HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1,=
score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:36:22,797::ho=
sted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::=
(start_monitoring)<br>> Current state EngineDown (score: 2400)<br>> M=
ainThread::INFO::2014-12-27<br>> 08:36:22,798::hosted_engine::332::ovirt=
_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>&=
gt; Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThread::INF=
O::2014-12-27<br>> 08:36:32,876::states::437::ovirt_hosted_engine_ha.age=
nt.hosted_engine.HostedEngine::(consume)<br>> Engine vm is running on ho=
st 10.0.0.94 (id 1)<br>> MainThread::INFO::2014-12-27<br>> 08:36:33,1=
69::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEn=
gine::(start_monitoring)<br>> Current state EngineDown (score: 2400)<br>=
> MainThread::INFO::2014-12-27<br>> 08:36:33,169::hosted_engine::332:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>> Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThrea=
d::INFO::2014-12-27<br>> 08:36:43,567::hosted_engine::327::ovirt_hosted_=
engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Curr=
ent state EngineDown (score: 2400)<br>> MainThread::INFO::2014-12-27<br>=
> 08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_=
engine.HostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 =
(id: 1, score: 2400)<br>> MainThread::INFO::2014-12-27<br>> 08:36:53,=
858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE=
ngine::(start_monitoring)<br>> Current state EngineDown (score: 2400)<br=
>> MainThread::INFO::2014-12-27<br>> 08:36:53,858::hosted_engine::332=
::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitorin=
g)<br>> Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThre=
ad::INFO::2014-12-27<br>> 08:37:04,028::state_machine::160::ovirt_hosted=
_engine_ha.agent.hosted_engine.HostedEngine::(refresh)<br>> Global metad=
ata: {'maintenance': False}<br>> MainThread::INFO::2014-12-27<br>> 08=
:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.=
HostedEngine::(refresh)<br>> Host 10.0.0.94 (id 1): {'extra':<br>> 'm=
etadata_parse_version=3D1\nmetadata_feature_version=3D1\ntimestamp=3D835987=
<br>> (Sat Dec 27 11:37:30<br>> 2014)\nhost-id=3D1\nscore=3D2400\nmai=
ntenance=3DFalse\nstate=3DEngineUp\n',<br>> 'hostname': '10.0.0.94', 'al=
ive': True, 'host-id': 1, 'engine-status':<br>> {'health': 'good', 'vm':=
'up', 'detail': 'up'}, 'score': 2400,<br>> 'maintenance': False, 'host-=
ts': 835987}<br>> MainThread::INFO::2014-12-27<br>> 08:37:04,028::sta=
te_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(=
refresh)<br>> Host 10.0.0.92 (id 3): {'extra':<br>> 'metadata_parse_v=
ersion=3D1\nmetadata_feature_version=3D1\ntimestamp=3D681528<br>> (Sat D=
ec 27 08:37:41<br>> 2014)\nhost-id=3D3\nscore=3D0\nmaintenance=3DTrue\ns=
tate=3DLocalMaintenance\n',<br>> 'hostname': '10.0.0.92', 'alive': True,=
'host-id': 3, 'engine-status':<br>> {'reason': 'vm not running on this =
host', 'health': 'bad', 'vm':<br>> 'down', 'detail': 'unknown'}, 'score'=
: 0, 'maintenance': True,<br>> 'host-ts': 681528}<br>> MainThread::IN=
FO::2014-12-27<br>> 08:37:04,028::state_machine::168::ovirt_hosted_engin=
e_ha.agent.hosted_engine.HostedEngine::(refresh)<br>> Local (id 2): {'en=
gine-health': {'reason': 'vm not running on this<br>> host', 'health': '=
bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':<br>> True, 'mem-free=
': 15300.0, 'maintenance': False, 'cpu-load': 0.0215,<br>> 'gateway': Tr=
ue}<br>> MainThread::INFO::2014-12-27<br>> 08:37:04,265::hosted_engin=
e::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_mon=
itoring)<br>> Current state EngineDown (score: 2400)<br>> MainThread:=
:INFO::2014-12-27<br>> 08:37:04,265::hosted_engine::332::ovirt_hosted_en=
gine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Best r=
emote host 10.0.0.94 (id: 1, score: 2400)<br>><br>> Thanks,<br>> C=
ong<br>><br>> On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabo=
s@redhat.com<mailto:stirabos@redhat.com><mailto:stirabos@redhat.co=
m>> wrote:<br>><br>><br>><br>> ----- Original Message ---=
--<br>> From: "Cong Yue" <Cong_Yue@alliedtelesis.com<mailto:Cong_Y=
ue@alliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>><br>&g=
t; To: "Simone Tiraboschi" <stirabos@redhat.com<mailto:stirabos@redha=
t.com><mailto:stirabos@redhat.com>><br>> Cc: users(a)ovirt.org=
<mailto:users@ovirt.org><mailto:users@ovirt.org><br>> Sent: =
Friday, December 19, 2014 7:22:10 PM<br>> Subject: RE: [ovirt-users] VM =
failover with ovirt3.5<br>><br>> Thanks for the information. This is =
the log for my three ovirt nodes.<br>> From the output of hosted-engine =
--vm-status, it shows the engine state for<br>> my 2nd and 3rd ovirt nod=
e is DOWN.<br>> Is this the reason why VM failover not work in my enviro=
nment?<br>><br>> No, they looks ok: you can run the engine VM on sing=
le host at a time.<br>><br>> How can I make<br>> also engine works=
for my 2nd and 3rd ovit nodes?<br>><br>> If you put the host 1 in lo=
cal maintenance mode ( hosted-engine --set-maintenance --mode=3Dlocal ) the=
VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --se=
t-maintenance --mode=3Dnone ) and put host 2 in local maintenance mode the =
VM should migrate again.<br>><br>> Can you please try that and post t=
he logs if something is going bad?<br>><br>><br>> --<br>> --=3D=
=3D Host 1 status =3D=3D--<br>><br>> Status up-to-date =
: True<br>> Hostname &nb=
sp; &=
nbsp; : 10.0.0.94<br>> Host ID =
: 1<br>> =
Engine status  =
; : {"health": "good", "vm": "up",<br>> "detail": "up"}<br>=
> Score &=
nbsp; : 2400<br>> Local maintenance &n=
bsp; : False<br>>=
Host timestamp &nb=
sp; : 150475<br>> Extra metadata (valid at timestamp):<br>> me=
tadata_parse_version=3D1<br>> metadata_feature_version=3D1<br>> times=
tamp=3D150475 (Fri Dec 19 13:12:18 2014)<br>> host-id=3D1<br>> score=
=3D2400<br>> maintenance=3DFalse<br>> state=3DEngineUp<br>><br>>=
;<br>> --=3D=3D Host 2 status =3D=3D--<br>><br>> Status up-to-date=
: True<br>&g=
t; Hostname =
: 10.0.0.93<br>> Host ID  =
; &nb=
sp;: 2<br>> Engine status &nbs=
p; : {"reason": "vm not running on<br>> this =
host", "health": "bad", "vm": "down", "detail": "unknown"}<br>> Score &n=
bsp; =
: 2400<br>> Local maintenance &=
nbsp; : False<br>> Host timesta=
mp : =
1572<br>> Extra metadata (valid at timestamp):<br>> metadata_parse_ve=
rsion=3D1<br>> metadata_feature_version=3D1<br>> timestamp=3D1572 (Fr=
i Dec 19 10:12:18 2014)<br>> host-id=3D2<br>> score=3D2400<br>> ma=
intenance=3DFalse<br>> state=3DEngineDown<br>><br>><br>> --=3D=
=3D Host 3 status =3D=3D--<br>><br>> Status up-to-date =
: False<br>> Hostname &n=
bsp; =
: 10.0.0.92<br>> Host ID  =
; : 3<br>>=
Engine status &nbs=
p; : unknown stale-data<br>> Score &nb=
sp; &=
nbsp;: 2400<br>> Local maintenance &n=
bsp; : False<br>> Host timestamp  =
; : 987<br>> Extra meta=
data (valid at timestamp):<br>> metadata_parse_version=3D1<br>> metad=
ata_feature_version=3D1<br>> timestamp=3D987 (Fri Dec 19 10:09:58 2014)<=
br>> host-id=3D3<br>> score=3D2400<br>> maintenance=3DFalse<br>>=
; state=3DEngineDown<br>><br>> --<br>> And the /var/log/ovirt-host=
ed-engine-ha/agent.log for three ovirt nodes are<br>> as follows:<br>>=
; --<br>> 10.0.0.94(hosted-engine-1)<br>> ---<br>> MainThread::INF=
O::2014-12-19<br>> 13:09:33,716::hosted_engine::327::ovirt_hosted_engine=
_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current st=
ate EngineUp (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:=
09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.H=
ostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, =
score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:09:44,017::hos=
ted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(=
start_monitoring)<br>> Current state EngineUp (score: 2400)<br>> Main=
Thread::INFO::2014-12-19<br>> 13:09:44,017::hosted_engine::332::ovirt_ho=
sted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>>=
Best remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::=
2014-12-19<br>> 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha=
.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state=
EngineUp (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:09:=
54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Host=
edEngine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, sco=
re: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:04,342::states=
::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)<b=
r>> Engine vm running on localhost<br>> MainThread::INFO::2014-12-19<=
br>> 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.host=
ed_engine.HostedEngine::(start_monitoring)<br>> Current state EngineUp (=
score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:04,617::hos=
ted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(=
start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, score: 2400)<b=
r>> MainThread::INFO::2014-12-19<br>> 13:10:14,657::state_machine::16=
0::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)<br>&g=
t; Global metadata: {'maintenance': False}<br>> MainThread::INFO::2014-1=
2-19<br>> 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent=
.hosted_engine.HostedEngine::(refresh)<br>> Host 10.0.0.93 (id 2): {'ext=
ra':<br>> 'metadata_parse_version=3D1\nmetadata_feature_version=3D1\ntim=
estamp=3D1448<br>> (Fri Dec 19 10:10:14<br>> 2014)\nhost-id=3D2\nscor=
e=3D2400\nmaintenance=3DFalse\nstate=3DEngineDown\n',<br>> 'hostname': '=
10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':<br>> {'reason'=
: 'vm not running on this host', 'health': 'bad', 'vm':<br>> 'down', 'de=
tail': 'unknown'}, 'score': 2400, 'maintenance': False,<br>> 'host-ts': =
1448}<br>> MainThread::INFO::2014-12-19<br>> 13:10:14,657::state_mach=
ine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh=
)<br>> Host 10.0.0.92 (id 3): {'extra':<br>> 'metadata_parse_version=
=3D1\nmetadata_feature_version=3D1\ntimestamp=3D987<br>> (Fri Dec 19 10:=
09:58<br>> 2014)\nhost-id=3D3\nscore=3D2400\nmaintenance=3DFalse\nstate=
=3DEngineDown\n',<br>> 'hostname': '10.0.0.92', 'alive': True, 'host-id'=
: 3, 'engine-status':<br>> {'reason': 'vm not running on this host', 'he=
alth': 'bad', 'vm':<br>> 'down', 'detail': 'unknown'}, 'score': 2400, 'm=
aintenance': False,<br>> 'host-ts': 987}<br>> MainThread::INFO::2014-=
12-19<br>> 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agen=
t.hosted_engine.HostedEngine::(refresh)<br>> Local (id 1): {'engine-heal=
th': {'health': 'good', 'vm': 'up',<br>> 'detail': 'up'}, 'bridge': True=
, 'mem-free': 1079.0, 'maintenance':<br>> False, 'cpu-load': 0.0269, 'ga=
teway': True}<br>> MainThread::INFO::2014-12-19<br>> 13:10:14,904::ho=
sted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::=
(start_monitoring)<br>> Current state EngineUp (score: 2400)<br>> Mai=
nThread::INFO::2014-12-19<br>> 13:10:14,904::hosted_engine::332::ovirt_h=
osted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>>=
; Best remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO:=
:2014-12-19<br>> 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_h=
a.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current stat=
e EngineUp (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10=
:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hos=
tedEngine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, sc=
ore: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:35,499::hoste=
d_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(st=
art_monitoring)<br>> Current state EngineUp (score: 2400)<br>> MainTh=
read::INFO::2014-12-19<br>> 13:10:35,499::hosted_engine::332::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> B=
est remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::20=
14-12-19<br>> 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state E=
ngineUp (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:45=
,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hosted=
Engine::(start_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, score=
: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:10:56,070::hosted_e=
ngine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start=
_monitoring)<br>> Current state EngineUp (score: 2400)<br>> MainThrea=
d::INFO::2014-12-19<br>> 13:10:56,070::hosted_engine::332::ovirt_hosted_=
engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Best=
remote host 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::2014-=
12-19<br>> 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hoste=
d_engine.HostedEngine::(consume)<br>> Engine vm running on localhost<br>=
> MainThread::INFO::2014-12-19<br>> 13:11:06,359::hosted_engine::327:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>> Current state EngineUp (score: 2400)<br>> MainThread::INFO::20=
14-12-19<br>> 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>> Best remote hos=
t 10.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::2014-12-19<br>&g=
t; 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_en=
gine.HostedEngine::(start_monitoring)<br>> Current state EngineUp (score=
: 2400)<br>> MainThread::INFO::2014-12-19<br>> 13:11:16,658::hosted_e=
ngine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start=
_monitoring)<br>> Best remote host 10.0.0.93 (id: 2, score: 2400)<br>>=
; MainThread::INFO::2014-12-19<br>> 13:11:26,991::hosted_engine::327::ov=
irt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<b=
r>> Current state EngineUp (score: 2400)<br>> MainThread::INFO::2014-=
12-19<br>> 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agen=
t.hosted_engine.HostedEngine::(start_monitoring)<br>> Best remote host 1=
0.0.0.93 (id: 2, score: 2400)<br>> MainThread::INFO::2014-12-19<br>> =
13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engin=
e.HostedEngine::(start_monitoring)<br>> Current state EngineUp (score: 2=
400)<br>> MainThread::INFO::2014-12-19<br>> 13:11:37,341::hosted_engi=
ne::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_mo=
nitoring)<br>> Best remote host 10.0.0.93 (id: 2, score: 2400)<br>> -=
---<br>><br>> 10.0.0.93 (hosted-engine-2)<br>> MainThread::INFO::2=
014-12-19<br>> 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.=
agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state =
EngineDown (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 10:12=
:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hos=
tedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1, sc=
ore: 2400)<br>> MainThread::INFO::2014-12-19<br>> 10:12:28,651::hoste=
d_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(st=
art_monitoring)<br>> Current state EngineDown (score: 2400)<br>> Main=
Thread::INFO::2014-12-19<br>> 10:12:28,652::hosted_engine::332::ovirt_ho=
sted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>>=
Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThread::INFO::=
2014-12-19<br>> 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha=
.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current state=
EngineDown (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 10:1=
2:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Ho=
stedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1, s=
core: 2400)<br>> MainThread::INFO::2014-12-19<br>> 10:12:49,338::host=
ed_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(s=
tart_monitoring)<br>> Current state EngineDown (score: 2400)<br>> Mai=
nThread::INFO::2014-12-19<br>> 10:12:49,338::hosted_engine::332::ovirt_h=
osted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>>=
; Best remote host 10.0.0.94 (id: 1, score: 2400)<br>> MainThread::INFO:=
:2014-12-19<br>> 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_h=
a.agent.hosted_engine.HostedEngine::(start_monitoring)<br>> Current stat=
e EngineDown (score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 10:=
12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.H=
ostedEngine::(start_monitoring)<br>> Best remote host 10.0.0.94 (id: 1, =
score: 2400)<br>> MainThread::INFO::2014-12-19<br>> 10:13:10,010::hos=
ted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(=
start_monitoring)<br>> Current state EngineDown (score: 2400)<br>> Ma=
inThread::INFO::2014-12-19<br>> 10:13:10,010::hosted_engine::332::ovirt_=
hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>&g=
t; Best remote host 10.0.0.94 (id: 1, score: 2400)<br>><br>><br>> =
10.0.0.92(hosted-engine-3)<br>> same as 10.0.0.93<br>> --<br>><br>=
> -----Original Message-----<br>> From: Simone Tiraboschi [mailto:sti=
rabos(a)redhat.com]<br>> Sent: Friday, December 19, 2014 12:28 AM<br>> =
To: Yue, Cong<br>> Cc: users@ovirt.org<mailto:users@ovirt.org><=
mailto:users@ovirt.org><br>> Subject: Re: [ovirt-users] VM failover w=
ith ovirt3.5<br>><br>><br>><br>> ----- Original Message -----<b=
r>> From: "Cong Yue" <Cong_Yue@alliedtelesis.com<mailto:Cong_Yue@a=
lliedtelesis.com><mailto:Cong_Yue@alliedtelesis.com>><br>> T=
o: users@ovirt.org<mailto:users@ovirt.org><mailto:users@ovirt.org&=
gt;<br>> Sent: Friday, December 19, 2014 2:14:33 AM<br>> Subject: [ov=
irt-users] VM failover with ovirt3.5<br>><br>><br>><br>> Hi<br>=
><br>><br>><br>> In my environment, I have 3 ovirt nodes as one=
cluster. And on top of<br>> host-1, there is one vm to host ovirt engin=
e.<br>><br>> Also I have one external storage for the cluster to use =
as data domain<br>> of engine and data.<br>><br>> I confirmed live=
migration works well in my environment.<br>><br>> But it seems very =
buggy for VM failover if I try to force to shut down<br>> one ovirt node=
. Sometimes the VM in the node which is shutdown can<br>> migrate to oth=
er host, but it take more than several minutes.<br>><br>> Sometimes, =
it can not migrate at all. Sometimes, only when the host is<br>> back, t=
he VM is beginning to move.<br>><br>> Can you please check or share t=
he logs under /var/log/ovirt-hosted-engine-ha/<br>> ?<br>><br>> Is=
there some documentation to explain how VM failover is working? And<br>>=
; is there some bugs reported related with this?<br>><br>> http://www=
.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram<br>><br>> =
Thanks in advance,<br>><br>> Cong<br>><br>><br>><br>><br>=
> This e-mail message is for the sole use of the intended recipient(s)<b=
r>> and may contain confidential and privileged information. Any<br>>=
unauthorized review, use, disclosure or distribution is prohibited. If<br>=
> you are not the intended recipient, please contact the sender by reply=
<br>> e-mail and destroy all copies of the original message. If you are =
the<br>> intended recipient, please be advised that the content of this =
message<br>> is subject to access, review and disclosure by the sender's=
e-mail System<br>> Administrator.<br>><br>> _____________________=
__________________________<br>> Users mailing list<br>> Users(a)ovirt.o=
rg<mailto:Users@ovirt.org><mailto:Users@ovirt.org><br>> http=
://lists.ovirt.org/mailman/listinfo/users<br>><br>> This e-mail messa=
ge is for the sole use of the intended recipient(s) and may<br>> contain=
confidential and privileged information. Any unauthorized review,<br>> =
use, disclosure or distribution is prohibited. If you are not the intended<=
br>> recipient, please contact the sender by reply e-mail and destroy al=
l copies<br>> of the original message. If you are the intended recipient=
, please be<br>> advised that the content of this message is subject to =
access, review and<br>> disclosure by the sender's e-mail System Adminis=
trator.<br>><br>><br>> This e-mail message is for the sole use of =
the intended recipient(s) and may contain confidential and privileged infor=
mation. Any unauthorized review, use, disclosure or distribution is prohibi=
ted. If you are not the intended recipient, please contact the sender by re=
ply e-mail and destroy all copies of the original message. If you are the i=
ntended recipient, please be advised that the content of this message is su=
bject to access, review and disclosure by the sender's e-mail System Admini=
strator.<br>> _______________________________________________<br>> Us=
ers mailing list<br>> Users@ovirt.org<mailto:Users@ovirt.org><m=
ailto:Users@ovirt.org><br>> http://lists.ovirt.org/mailman/listinfo/u=
sers<br>><br>> ________________________________<br>> This e-mail m=
essage is for the sole use of the intended recipient(s) and may contain con=
fidential and privileged information. Any unauthorized review, use, disclos=
ure or distribution is prohibited. If you are not the intended recipient, p=
lease contact the sender by reply e-mail and destroy all copies of the orig=
inal message. If you are the intended recipient, please be advised that the=
content of this message is subject to access, review and disclosure by the=
sender's e-mail System Administrator.<br>><br>> ____________________=
____________<br>> This e-mail message is for the sole use of the intende=
d recipient(s) and may contain confidential and privileged information. Any=
unauthorized review, use, disclosure or distribution is prohibited. If you=
are not the intended recipient, please contact the sender by reply e-mail =
and destroy all copies of the original message. If you are the intended rec=
ipient, please be advised that the content of this message is subject to ac=
cess, review and disclosure by the sender's e-mail System Administrator.<br=
><div><br></div>This e-mail message is for the sole use of the intended rec=
ipient(s) and may contain confidential and privileged information. Any unau=
thorized review, use, disclosure or distribution is prohibited. If you are =
not the intended recipient, please contact the sender by reply e-mail and d=
estroy all copies of the original message. If you are the intended recipien=
t, please be advised that the content of this message is subject to access,=
review and disclosure by the sender's e-mail System Administrator.<br><div=
><br></div><br>------------------------------<br><div><br></div>___________=
____________________________________<br>Users mailing list<br>Users(a)ovirt.o=
rg<br>http://lists.ovirt.org/mailman/listinfo/users<br><div><br></div><br>E=
nd of Users Digest, Vol 39, Issue 169<br>**********************************=
****<br></div><div><br></div></div></body></html>
------=_Part_1875460_365779577.1419876418683--
Re: [ovirt-users] ??: bond mode balance-alb
by Nikolai Sednev
------=_Part_1871238_1615445632.1419874799888
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
I'd like to add that using the floating MAC of mode 5 ("balance-tlb") or the ARP negotiation of mode 6 ("balance-alb") will hurt latency and performance, so these modes should be avoided.
Mode zero, or "balance-rr", should also be avoided: it is the only mode that allows a single TCP/IP stream to use more than one interface, so it adds jitter, latency and performance penalties, because frames/packets of the same stream are sent and received on different interfaces, whereas the preferred approach is to balance per flow. Unless your data center carries nothing but L2 traffic, I really don't see any use for mode zero.
Cisco routers have a feature called IP CEF, which is enabled by default and balances traffic per TCP/IP flow rather than per packet; it exists to make better per-flow load-balancing decisions. If it is turned off, per-packet load balancing is enforced, which puts a heavy load on the router's CPU and memory because a forwarding decision has to be made for every packet; the higher the bit rate, the harder the impact on the router's resources, especially with small packets.
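For reference, here is a minimal sketch of a per-flow, mode 4 (802.3ad/LACP) bond using initscripts-style ifcfg files. The bond and NIC names (bond0, em1, em2) are placeholders, and on an oVirt host the bond would normally be created from the engine's "Setup Host Networks" dialog rather than edited by hand:

# /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical example)
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
ONBOOT=yes
BOOTPROTO=none
# mode 4 = 802.3ad/LACP; layer3+4 hashing keeps each TCP/IP flow on a single slave
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4"

# /etc/sysconfig/network-scripts/ifcfg-em1  (repeat for em2)
DEVICE=em1
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

The xmit_hash_policy=layer3+4 option is what gives the per-flow balancing described above; the switch ports on the other end must be configured as an LACP port-channel for mode 4 to negotiate.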
Thanks in advance.
Best regards,
Nikolai
____________________
Nikolai Sednev
Senior Quality Engineer at Compute team
Red Hat Israel
34 Jerusalem Road,
Ra'anana, Israel 43501
Tel: +972 9 7692043
Mobile: +972 52 7342734
Email: nsednev(a)redhat.com
IRC: nsednev
----- Original Message -----
From: users-request(a)ovirt.org
To: users(a)ovirt.org
Sent: Monday, December 29, 2014 6:53:59 AM
Subject: Users Digest, Vol 39, Issue 163
Send Users mailing list submissions to
users(a)ovirt.org
To subscribe or unsubscribe via the World Wide Web, visit
http://lists.ovirt.org/mailman/listinfo/users
or, via email, send a message with subject or body 'help' to
users-request(a)ovirt.org
You can reach the person managing the list at
users-owner(a)ovirt.org
When replying, please edit your Subject line so it is more specific
than "Re: Contents of Users digest..."
Today's Topics:
1. Re: Problem after update ovirt to 3.5 (Juan Jose)
2. Re: ??: bond mode balance-alb (Dan Kenigsberg)
3. Re: VM failover with ovirt3.5 (Yue, Cong)
----------------------------------------------------------------------
Message: 1
Date: Sun, 28 Dec 2014 20:08:37 +0100
From: Juan Jose <jj197005(a)gmail.com>
To: Simone Tiraboschi <stirabos(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] Problem after update ovirt to 3.5
Message-ID:
<CADrE9wYtNdMPNsyjjZxA3zbyKZhYB5DA03wQ17dTLfuBBtA-Bg(a)mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Many thanks Simone,
Juanjo.
On Tue, Dec 16, 2014 at 1:48 PM, Simone Tiraboschi <stirabos(a)redhat.com>
wrote:
>
>
> ----- Original Message -----
> > From: "Juan Jose" <jj197005(a)gmail.com>
> > To: "Yedidyah Bar David" <didi(a)redhat.com>, sbonazzo(a)redhat.com
> > Cc: users(a)ovirt.org
> > Sent: Tuesday, December 16, 2014 1:03:17 PM
> > Subject: Re: [ovirt-users] Problem after update ovirt to 3.5
> >
> > Hello everybody,
> >
> > It was the firewall: after upgrading my engine, the NFS configuration had
> > disappeared. I configured it again as Red Hat documents and now it works
> > properly again.
> >
> > Many thanks again for the indications.
>
> We already had a patch for it [1],
> it will be released next month with oVirt 3.5.1
>
> [1] http://gerrit.ovirt.org/#/c/32874/
>
> > Juanjo.
> >
> > On Mon, Dec 15, 2014 at 2:32 PM, Yedidyah Bar David < didi(a)redhat.com >
> > wrote:
> >
> >
> > ----- Original Message -----
> > > From: "Juan Jose" < jj197005(a)gmail.com >
> > > To: users(a)ovirt.org
> > > Sent: Monday, December 15, 2014 3:17:15 PM
> > > Subject: [ovirt-users] Problem after update ovirt to 3.5
> > >
> > > Hello everybody,
> > >
> > > After upgrading my engine to oVirt 3.5, I have also upgraded one of my
> > > hosts to oVirt 3.5. After that it seems that everything has gone well, apparently.
> > >
> > > But after some seconds my ISO domain is disconnected and it is impossible
> > > to Activate. I'm attaching my engine.log. The error below is shown each
> > > time I try to Activate the ISO domain. Before the upgrade it was working
> > > without problems:
> > >
> > > 2014-12-15 13:25:07,607 ERROR
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, Call
> > > Stack: null, Custom Event ID: -1, Message: Failed to connect Host host1 to
> > > the Storage Domains ISO_DOMAIN.
> > > 2014-12-15 13:25:07,608 INFO
> > > [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] FINISH,
> > > ConnectStorageServerVDSCommand, return:
> > > {81c0a853-715c-4478-a812-6a74808fc482=477}, log id: 3590969e
> > > 2014-12-15 13:25:07,615 ERROR
> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, Call
> > > Stack: null, Custom Event ID: -1, Message: The error message for connection
> > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 returned by VDSM
> > > was: Problem while trying to mount target
> > > 2014-12-15 13:25:07,616 ERROR
> > > [org.ovirt.engine.core.bll.storage.NFSStorageHelper]
> > > (org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with details
> > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 failed because
> > > of error code 477 and error message is: problem while trying to mount
> > > target
> > >
> > > If any other information is required, please tell me.
> >
> > Is the ISO domain on the engine host?
> >
> > Please check there iptables and /etc/exports, /etc/exports.d.
> >
> > Please post the setup (upgrade) log, check /var/log/ovirt-engine/setup.
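(As a rough sketch of those checks, run on the engine host; the export path is the one from the log above, and the port list assumes the default NFS ports documented for oVirt, so adjust to your setup:)

# Is the ISO domain export still defined and active?
cat /etc/exports /etc/exports.d/*.exports
exportfs -v

# Can a host resolve and see the export? (run from an oVirt host)
showmount -e ovirt-engine.siee.local

# Are the portmapper/NFS ports open on the engine host?
# (plus mountd/statd/lockd ports if they are fixed in /etc/sysconfig/nfs)
iptables -L -n | grep -E '111|2049|892|662'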
> >
> > Thanks,
> > --
> > Didi
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> >
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.ovirt.org/pipermail/users/attachments/20141228/bab30c2a/atta...>
------------------------------
Message: 2
Date: Sun, 28 Dec 2014 23:56:58 +0000
From: Dan Kenigsberg <danken(a)redhat.com>
To: Blaster <Blaster(a)556nato.com>
Cc: "Users(a)ovirt.org List" <users(a)ovirt.org>
Subject: Re: [ovirt-users] ??: bond mode balance-alb
Message-ID: <20141228235658.GE21690(a)redhat.com>
Content-Type: text/plain; charset=us-ascii
On Fri, Dec 26, 2014 at 12:39:45PM -0600, Blaster wrote:
> On 12/23/2014 2:55 AM, Dan Kenigsberg wrote:
> >Bug 1094842 - Bonding modes 0, 5 and 6 should be avoided for VM networks
> >https://bugzilla.redhat.com/show_bug.cgi?id=1094842#c0
>
> Dan,
>
> What is bad about these modes that oVirt can't use them?
I can only quote jpirko's words from the link above:
Do not use tlb or alb in bridge, never! It does not work, that's it. The reason
is it mangles source macs in xmit frames and arps. When it is possible, just
use mode 4 (lacp). That should be always possible because all enterprise
switches support that. Generally, for 99% of use cases, you *should* use mode
4. There is no reason to use other modes.
>
> I just tested mode 4, and the LACP with Fedora 20 appears to not be
> compatible with the LAG mode on my Dell 2824.
>
> Would there be any issues with bringing two NICS into the VM and doing
> balance-alb at the guest level?
>
>
>
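(A rough way to check whether LACP actually negotiated on the Fedora side, assuming the bond is named bond0; the exact output fields vary by kernel version:)

# Shows the bonding mode, aggregator IDs and per-slave LACP partner details
cat /proc/net/bonding/bond0

# Link state and error counters for the bond and its slaves
ip -s link show bond0

# Kernel messages about LACP/slave state changes
dmesg | grep -i bond

# Note: if the switch-side LAG is a static trunk rather than LACP,
# mode 4 will never negotiate and the aggregator will stay down.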
------------------------------
Message: 3
Date: Sun, 28 Dec 2014 20:53:44 -0800
From: "Yue, Cong" <Cong_Yue(a)alliedtelesis.com>
To: Artyom Lukianov <alukiano(a)redhat.com>
Cc: "users(a)ovirt.org" <users(a)ovirt.org>
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Message-ID: <B7E7D6D4-B85D-471C-87A7-EA9AD32BF279(a)alliedtelesis.com>
Content-Type: text/plain; charset="utf-8"
I checked it again and confirmed that one guest VM is running on top of this host. The log is as follows:
[root@compute2-1 vdsm]# ps -ef | grep qemu
qemu 2983 846 0 Dec19 ? 00:00:00 [supervdsmServer] <defunct>
root 5489 3053 0 20:49 pts/0 00:00:00 grep --color=auto qemu
qemu 26128 1 0 Dec19 ? 01:09:19 /usr/libexec/qemu-kvm
-name testvm2 -S -machine rhel6.5.0,accel=kvm,usb=off -cpu Nehalem -m
500 -realtime mlock=off -smp 1,maxcpus=16,sockets=16,cores=1,threads=1
-uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios
type=1,manufacturer=oVirt,product=oVirt
Node,version=7-0.1406.el7.centos.2.5,serial=4C4C4544-0030-3310-8059-B8C04F585231,uuid=e46bca87-4df5-4287-844b-90a26fccef33
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/testvm2.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2014-12-19T20:18:01,driftfix=slew -no-kvm-pit-reinjection
-no-hpet -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive file=/rhev/data-center/00000002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,if=none,id=drive-virtio-disk0,format=qcow2,serial=b4b5426b-95e3-41af-b286-da245891cdaf,cache=none,werror=stop,rerror=stop,aio=threads
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=27 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:db:94:01,bus=pci.0,addr=0x3
-chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.guest_agent.0,server,nowait
-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice tls-port=5900,addr=10.0.0.92,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=33554432 -incoming tcp:[::]:49152 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-28
20:49:27,315::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:27,646::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:37,961::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-28
20:49:48,048::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-28
20:49:48,319::states::208::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)
Score is 0 due to local maintenance mode
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-28
20:49:48,319::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
Thanks,
Cong
On 2014/12/28, at 3:46, "Artyom Lukianov" <alukiano(a)redhat.com> wrote:
I see that you set local maintenance on host3, which does not have the engine VM on it, so there is nothing to migrate from that host.
If you set local maintenance on host1, the VM should migrate to another host with a positive score.
Thanks
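As a rough sketch of that procedure (using only the hosted-engine and tail commands already shown in this thread), the idea is to find the host whose "Engine status" reports the VM as up, enable local maintenance on that host, and watch the agent log while the VM moves:

    # run on any HA host: find which host currently reports the engine VM as up
    hosted-engine --vm-status

    # run on that host only: lower its score by entering local maintenance
    hosted-engine --set-maintenance --mode=local

    # watch the local state machine; the VM should restart or migrate to a host with a positive score
    tail -f /var/log/ovirt-hosted-engine-ha/agent.log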
----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>>
To: "Simone Tiraboschi" <stirabos(a)redhat.com<mailto:stirabos@redhat.com>>
Cc: users(a)ovirt.org<mailto:users@ovirt.org>
Sent: Saturday, December 27, 2014 6:58:32 PM
Subject: Re: [ovirt-users] VM failover with ovirt3.5
Hi
I had a try with "hosted-engine --set-maintenance --mode=local" on
compute2-1, which is host 3 in my cluster. From the log, it shows that
maintenance mode is detected, but migration does not happen.
The logs are as follows. Is there any other config I need to check?
[root@compute2-1 vdsm]# hosted-engine --vm-status
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 836296
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=836296 (Sat Dec 27 11:42:39 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 687358
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=687358 (Sat Dec 27 08:42:04 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : True
Hostname : 10.0.0.92
Host ID : 3
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 0
Local maintenance : True
Host timestamp : 681827
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=681827 (Sat Dec 27 08:42:40 2014)
host-id=3
score=0
maintenance=True
state=LocalMaintenance
[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:42:41,109::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:42:51,198::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:01,507::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:01,773::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)
Local maintenance detected
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state LocalMaintenance (score: 0)
MainThread::INFO::2014-12-27
08:43:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
[root@compute2-3 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
11:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:49,449::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:36:59,739::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:37:09,779::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-27
11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:37:10,026::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-27
11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-27
11:37:20,331::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log
MainThread::INFO::2014-12-27
08:36:12,462::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:22,798::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm is running on host 10.0.0.94 (id 1)
MainThread::INFO::2014-12-27
08:36:33,169::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:43,567::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:36:53,858::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.94 (id 1): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=835987
(Sat Dec 27 11:37:30
2014)\nhost-id=1\nscore=2400\nmaintenance=False\nstate=EngineUp\n',
'hostname': '10.0.0.94', 'alive': True, 'host-id': 1, 'engine-status':
{'health': 'good', 'vm': 'up', 'detail': 'up'}, 'score': 2400,
'maintenance': False, 'host-ts': 835987}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.92 (id 3): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=681528
(Sat Dec 27 08:37:41
2014)\nhost-id=3\nscore=0\nmaintenance=True\nstate=LocalMaintenance\n',
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 0, 'maintenance': True,
'host-ts': 681528}
MainThread::INFO::2014-12-27
08:37:04,028::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 2): {'engine-health': {'reason': 'vm not running on this
host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'bridge':
True, 'mem-free': 15300.0, 'maintenance': False, 'cpu-load': 0.0215,
'gateway': True}
MainThread::INFO::2014-12-27
08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-27
08:37:04,265::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
Thanks,
Cong
On 2014/12/22, at 5:29, "Simone Tiraboschi" <stirabos(a)redhat.com> wrote:
----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>>
To: "Simone Tiraboschi" <stirabos(a)redhat.com<mailto:stirabos@redhat.com>>
Cc: users(a)ovirt.org<mailto:users@ovirt.org>
Sent: Friday, December 19, 2014 7:22:10 PM
Subject: RE: [ovirt-users] VM failover with ovirt3.5
Thanks for the information. This is the log for my three ovirt nodes.
From the output of hosted-engine --vm-status, it shows the engine state for
my 2nd and 3rd ovirt nodes is DOWN.
Is this the reason why VM failover does not work in my environment?
No, they look OK: you can only run the engine VM on a single host at a time.
How can I make
the engine also work for my 2nd and 3rd ovirt nodes?
If you put host 1 in local maintenance mode ( hosted-engine --set-maintenance --mode=local ) the VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --set-maintenance --mode=none ) and put host 2 in local maintenance mode the VM should migrate again.
Can you please try that and post the logs if something goes wrong?
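A minimal sketch of that round trip, assuming the same hosted-engine CLI quoted above (each --set-maintenance command is run on the host named in the comment):

    # on host 1, which currently runs the engine VM
    hosted-engine --set-maintenance --mode=local
    hosted-engine --vm-status    # wait until another host reports the VM as up

    # back on host 1, once the VM has moved
    hosted-engine --set-maintenance --mode=none

    # on host 2, to push the VM away again
    hosted-engine --set-maintenance --mode=local
    hosted-engine --vm-status    # the VM should migrate to a host with a positive score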
--
--== Host 1 status ==--
Status up-to-date : True
Hostname : 10.0.0.94
Host ID : 1
Engine status : {"health": "good", "vm": "up",
"detail": "up"}
Score : 2400
Local maintenance : False
Host timestamp : 150475
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=150475 (Fri Dec 19 13:12:18 2014)
host-id=1
score=2400
maintenance=False
state=EngineUp
--== Host 2 status ==--
Status up-to-date : True
Hostname : 10.0.0.93
Host ID : 2
Engine status : {"reason": "vm not running on
this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score : 2400
Local maintenance : False
Host timestamp : 1572
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1572 (Fri Dec 19 10:12:18 2014)
host-id=2
score=2400
maintenance=False
state=EngineDown
--== Host 3 status ==--
Status up-to-date : False
Hostname : 10.0.0.92
Host ID : 3
Engine status : unknown stale-data
Score : 2400
Local maintenance : False
Host timestamp : 987
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=987 (Fri Dec 19 10:09:58 2014)
host-id=3
score=2400
maintenance=False
state=EngineDown
--
And the /var/log/ovirt-hosted-engine-ha/agent.log for three ovirt nodes are
as follows:
--
10.0.0.94 (hosted-engine-1)
---
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.93 (id 2): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448
(Fri Dec 19 10:10:14
2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
'host-ts': 1448}
MainThread::INFO::2014-12-19
13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Host 10.0.0.92 (id 3): {'extra':
'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987
(Fri Dec 19 10:09:58
2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n',
'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':
{'reason': 'vm not running on this host', 'health': 'bad', 'vm':
'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False,
'host-ts': 987}
MainThread::INFO::2014-12-19
13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)
Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up',
'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance':
False, 'cpu-load': 0.0269, 'gateway': True}
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume)
Engine vm running on localhost
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19
13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.93 (id: 2, score: 2400)
----
10.0.0.93 (hosted-engine-2)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19
10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)
Best remote host 10.0.0.94 (id: 1, score: 2400)
10.0.0.92 (hosted-engine-3)
same as 10.0.0.93
--
-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:28 AM
To: Yue, Cong
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5
----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>>
To: users(a)ovirt.org<mailto:users@ovirt.org>
Sent: Friday, December 19, 2014 2:14:33 AM
Subject: [ovirt-users] VM failover with ovirt3.5
Hi
In my environment, I have 3 ovirt nodes as one cluster. And on top of
host-1, there is one vm to host ovirt engine.
Also I have one external storage for the cluster to use as data domain
of engine and data.
I confirmed live migration works well in my environment.
But VM failover seems very buggy if I try to force one ovirt node to
shut down. Sometimes the VM on the node which is shut down can
migrate to another host, but it takes more than several minutes.
Sometimes it cannot migrate at all. Sometimes the VM only begins to
move once the host is back.
Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/
?
Is there some documentation explaining how VM failover works? And
are there any bugs reported related to this?
http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram
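If it helps for sharing them, one simple way to collect those logs from every node is a plain tar of the whole directory (the archive name here is just an example):

    # run on each ovirt node, then attach the resulting files
    tar czf hosted-engine-ha-logs-$(hostname).tar.gz /var/log/ovirt-hosted-engine-ha/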
Thanks in advance,
Cong
This e-mail message is for the sole use of the intended recipient(s)
and may contain confidential and privileged information. Any
unauthorized review, use, disclosure or distribution is prohibited. If
you are not the intended recipient, please contact the sender by reply
e-mail and destroy all copies of the original message. If you are the
intended recipient, please be advised that the content of this message
is subject to access, review and disclosure by the sender's e-mail System
Administrator.
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
------------------------------
_______________________________________________
Users mailing list
Users(a)ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
End of Users Digest, Vol 39, Issue 163
**************************************
------=_Part_1871238_1615445632.1419874799888
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: quoted-printable
<html><body><div style=3D"font-family: georgia,serif; font-size: 12pt; colo=
r: #000000"><div>I'd like to add that<span style=3D"font-size: 12pt;"> =
;using of floating MAC </span><span style=3D"font-size: 12pt;">"balance-tlb=
" </span><span style=3D"font-size: 12pt;">for mode 5 or ARP </span><span st=
yle=3D"font-size: 12pt;">negotiation for mode 6 load balancing "</span><spa=
n style=3D"font-size: 12pt;">balance-alb" will influence latency and perfor=
mance, using such mode should be avoided. </span></div><div>Mode zero =
or "balance-rr" should be also avoided as it is the only m=
ode that will allow a single TCP/IP stream to utilize more than one interfa=
ce, hence will create additional jitter, latency and performance impacts,&n=
bsp;as frames/packets will be sent and arrive from different interfaces, wh=
ile preferred is to balance on per flow. Unless in your data center you're =
not using L2-only based traffic, I really don't see any usage for mode zero=
.</div><div>In Cisco routers the is a functionality called IP-CEF, which is=
turned on by default and balancing traffic on per TCP/IP flow, instead of =
per-packet, it is being used for better routing decisions for per-flow load=
balancing, if turned off, then per-packet load balancing will be enforced,=
causing high performance impact on router's CPU and memory resources, as d=
ecision have to be made on per-packet level, the higher the bit rate, the h=
arder impact on resources of the router will be, especially for small sized=
packets.</div><div><br></div><div><span name=3D"x"></span><br>Thanks in ad=
vance.<br><div><br></div>Best regards,<br>Nikolai<br>____________________<b=
r>Nikolai Sednev<br>Senior Quality Engineer at Compute team<br>Red Hat Isra=
el<br>34 Jerusalem Road,<br>Ra'anana, Israel 43501<br><div><br></div>Tel: &=
nbsp; +972 9 7692043<br>Mobile: +972 52 7342734<br>Ema=
il: nsednev(a)redhat.com<br>IRC: nsednev<span name=3D"x"></span><br></div><di=
v><br></div><hr id=3D"zwchr"><div style=3D"color:#000;font-weight:normal;fo=
nt-style:normal;text-decoration:none;font-family:Helvetica,Arial,sans-serif=
;font-size:12pt;"><b>From: </b>users-request(a)ovirt.org<br><b>To: </b>users@=
ovirt.org<br><b>Sent: </b>Monday, December 29, 2014 6:53:59 AM<br><b>Subjec=
t: </b>Users Digest, Vol 39, Issue 163<br><div><br></div>Send Users mailing=
list submissions to<br> use=
rs(a)ovirt.org<br><div><br></div>To subscribe or unsubscribe via the World Wi=
de Web, visit<br> http://lis=
ts.ovirt.org/mailman/listinfo/users<br>or, via email, send a message with s=
ubject or body 'help' to<br>  =
;users-request(a)ovirt.org<br><div><br></div>You can reach the person managin=
g the list at<br> users-owne=
r(a)ovirt.org<br><div><br></div>When replying, please edit your Subject line =
so it is more specific<br>than "Re: Contents of Users digest..."<br><div><b=
r></div><br>Today's Topics:<br><div><br></div> 1. Re: Pro=
blem after update ovirt to 3.5 (Juan Jose)<br> 2. Re: ??:=
bond mode balance-alb (Dan Kenigsberg)<br> 3. Re: VM fai=
lover with ovirt3.5 (Yue, Cong)<br><div><br></div><br>---------------------=
-------------------------------------------------<br><div><br></div>Message=
: 1<br>Date: Sun, 28 Dec 2014 20:08:37 +0100<br>From: Juan Jose <jj19700=
5(a)gmail.com><br>To: Simone Tiraboschi <stirabos(a)redhat.com><br>Cc:=
"users(a)ovirt.org" <users(a)ovirt.org><br>Subject: Re: [ovirt-users] Pr=
oblem after update ovirt to 3.5<br>Message-ID:<br> &=
nbsp; <CADrE9wYtNdMPNsyjjZxA3zbyKZhYB5DA03wQ17dTLfuBBtA=
-Bg(a)mail.gmail.com><br>Content-Type: text/plain; charset=3D"utf-8"<br><d=
iv><br></div>Many thanks Simone,<br><div><br></div>Juanjo.<br><div><br></di=
v>On Tue, Dec 16, 2014 at 1:48 PM, Simone Tiraboschi <stirabos(a)redhat.co=
m><br>wrote:<br><div><br></div>><br>><br>> ----- Original Messa=
ge -----<br>> > From: "Juan Jose" <jj197005(a)gmail.com><br>> =
> To: "Yedidyah Bar David" <didi(a)redhat.com>, sbonazzo(a)redhat.com<=
br>> > Cc: users(a)ovirt.org<br>> > Sent: Tuesday, December 16, 2=
014 1:03:17 PM<br>> > Subject: Re: [ovirt-users] Problem after update=
ovirt to 3.5<br>> ><br>> > Hello everybody,<br>> ><br>&g=
t; > It was the firewall, after upgrade my engine the NFS configuration =
had<br>> > disappered, I have configured again as Red Hat says and no=
w it works<br>> > properly again.<br>> ><br>> > Many than=
k again for the indications.<br>><br>> We already had a patch for it =
[1],<br>> it will released next month with oVirt 3.5.1<br>><br>> [=
1] http://gerrit.ovirt.org/#/c/32874/<br>><br>> > Juanjo.<br>> =
><br>> > On Mon, Dec 15, 2014 at 2:32 PM, Yedidyah Bar David < =
didi(a)redhat.com ><br>> > wrote:<br>> ><br>> ><br>> =
> ----- Original Message -----<br>> > > From: "Juan Jose" < =
jj197005(a)gmail.com ><br>> > > To: users(a)ovirt.org<br>> > =
> Sent: Monday, December 15, 2014 3:17:15 PM<br>> > > Subject: =
[ovirt-users] Problem after update ovirt to 3.5<br>> > ><br>> &=
gt; > Hello everybody,<br>> > ><br>> > > After upgrade=
my engine to oVirt 3.5, I have also upgraded one of my<br>> hosts<br>&g=
t; > > to<br>> > > oVirt 3.5. After that it seems that all h=
ave gone good aparently.<br>> > ><br>> > > But in some se=
conds my ISO domain is desconnected and it is impossible<br>> to<br>>=
> > Activate. I'm attaching my engine.log. The below error is showed=
each<br>> time<br>> > > I<br>> > > try to Activate th=
e ISO domain. Before the upgrade it was working<br>> without<br>> >=
; > problems:<br>> > ><br>> > > 2014-12-15 13:25:07,60=
7 ERROR<br>> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandl=
ing.AuditLogDirector]<br>> > > (org.ovirt.thread.pool-8-thread-5) =
[460733dd] Correlation ID: null,<br>> Call<br>> > > Stack: null=
, Custom Event ID: -1, Message: Failed to connect Host<br>> host1 to<br>=
> > > the Storage Domains ISO_DOMAIN.<br>> > > 2014-12-15=
13:25:07,608 INFO<br>> > ><br>> [org.ovirt.engine.core.vdsbrok=
er.vdsbroker.ConnectStorageServerVDSCommand]<br>> > > (org.ovirt.t=
hread.pool-8-thread-5) [460733dd] FINISH,<br>> > > ConnectStorageS=
erverVDSCommand, return:<br>> > > {81c0a853-715c-4478-a812-6a74808=
fc482=3D477}, log id: 3590969e<br>> > > 2014-12-15 13:25:07,615 ER=
ROR<br>> > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.=
AuditLogDirector]<br>> > > (org.ovirt.thread.pool-8-thread-5) [460=
733dd] Correlation ID: null,<br>> Call<br>> > > Stack: null, Cu=
stom Event ID: -1, Message: The error message for<br>> connection<br>>=
; > > ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 ret=
urned by<br>> > > VDSM<br>> > > was: Problem while trying=
to mount target<br>> > > 2014-12-15 13:25:07,616 ERROR<br>> &g=
t; > [org.ovirt.engine.core.bll.storage.NFSStorageHelper]<br>> > &=
gt; (org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with<br>&g=
t; details<br>> > > ovirt-engine.siee.local:/var/lib/exports/iso-2=
0140303082312 failed<br>> because<br>> > > of error code 477 an=
d error message is: problem while trying to mount<br>> > > target<=
br>> > ><br>> > > If any other information is required, p=
lease tell me.<br>> ><br>> > Is the ISO domain on the engine ho=
st?<br>> ><br>> > Please check there iptables and /etc/exports,=
/etc/exports.d.<br>> ><br>> > Please post the setup (upgrade) =
log, check /var/log/ovirt-engine/setup.<br>> ><br>> > Thanks,<b=
r>> > --<br>> > Didi<br>> ><br>> > ________________=
_______________________________<br>> > Users mailing list<br>> >=
; Users(a)ovirt.org<br>> > http://lists.ovirt.org/mailman/listinfo/user=
s<br>> ><br>><br>-------------- next part --------------<br>An HTM=
L attachment was scrubbed...<br>URL: <http://lists.ovirt.org/pipermail/u=
sers/attachments/20141228/bab30c2a/attachment-0001.html><br><div><br></d=
iv>------------------------------<br><div><br></div>Message: 2<br>Date: Sun=
, 28 Dec 2014 23:56:58 +0000<br>From: Dan Kenigsberg <danken(a)redhat.com&=
gt;<br>To: Blaster <Blaster(a)556nato.com><br>Cc: "Users(a)ovirt.org List=
" <users(a)ovirt.org><br>Subject: Re: [ovirt-users] ??: bond mode balan=
ce-alb<br>Message-ID: <20141228235658.GE21690(a)redhat.com><br>Content-=
Type: text/plain; charset=3Dus-ascii<br><div><br></div>On Fri, Dec 26, 2014=
at 12:39:45PM -0600, Blaster wrote:<br>> On 12/23/2014 2:55 AM, Dan Ken=
igsberg wrote:<br>> >Bug 1094842 - Bonding modes 0, 5 and 6 should be=
avoided for VM networks<br>> >https://bugzilla.redhat.com/show_bug.c=
gi?id=3D1094842#c0<br>> <br>> Dan,<br>> <br>> What is bad about=
these modes that oVirt can't use them?<br><div><br></div>I can only quote =
jpirko's workds from the link above:<br><div><br></div> D=
o not use tlb or alb in bridge, never! It does not work, that's it. The rea=
son<br> is it mangles source macs in xmit frames and arps=
. When it is possible, just<br> use mode 4 (lacp). That s=
hould be always possible because all enterprise<br> switc=
hes support that. Generally, for 99% of use cases, you *should* use mode<br=
> 4. There is no reason to use other modes.<br><div><br><=
/div>> <br>> I just tested mode 4, and the LACP with Fedora 20 appear=
s to not be<br>> compatible with the LAG mode on my Dell 2824.<br>> <=
br>> Would there be any issues with bringing two NICS into the VM and do=
ing<br>> balance-alb at the guest level?<br>> <br>> <br>> <br><=
div><br></div><br>------------------------------<br><div><br></div>Message:=
3<br>Date: Sun, 28 Dec 2014 20:53:44 -0800<br>From: "Yue, Cong" <Cong_Y=
ue(a)alliedtelesis.com><br>To: Artyom Lukianov <alukiano(a)redhat.com>=
<br>Cc: "users(a)ovirt.org" <users(a)ovirt.org><br>Subject: Re: [ovirt-us=
ers] VM failover with ovirt3.5<br>Message-ID: <B7E7D6D4-B85D-471C-87A7-E=
A9AD32BF279(a)alliedtelesis.com><br>Content-Type: text/plain; charset=3D"u=
tf-8"<br><div><br></div>I checked it again and confirmed there is one guest=
VM is running on the top of this host. The log is as follows:<br><div><br>=
</div>[root@compute2-1 vdsm]# ps -ef | grep qemu<br>qemu &nbs=
p;2983 846 0 Dec19 ? 00:00:00<x-=
apple-data-detectors://0> [supervdsmServer] <defunct><br>root &nbs=
p; 5489 3053 0 20:49<x-apple-data-detectors://1=
> pts/0 00:00:00<x-apple-data-detectors://2> grep --c=
olor=3Dauto qemu<br>qemu 26128 1 0 Dec19 =
? 01:09:19 /usr/libexec/qemu-kvm<br>-name testvm=
2 -S -machine rhel6.5.0,accel=3Dkvm,usb=3Doff -cpu Nehalem -m<br>500 -realt=
ime mlock=3Doff -smp 1,maxcpus=3D16,sockets=3D16,cores=3D1,threads=3D1<br>-=
uuid e46bca87-4df5-4287-844b-90a26fccef33 -smbios<br>type=3D1,manufacturer=
=3DoVirt,product=3DoVirt<br>Node,version=3D7-0.1406.el7.centos.2.5,serial=
=3D4C4C4544-0030-3310-8059-B8C04F585231,uuid=3De46bca87-4df5-4287-844b-90a2=
6fccef33<br>-no-user-config -nodefaults -chardev<br>socket,id=3Dcharmonitor=
,path=3D/var/lib/libvirt/qemu/testvm2.monitor,server,nowait<br>-mon chardev=
=3Dcharmonitor,id=3Dmonitor,mode=3Dcontrol -rtc<br>base=3D2014-12-19T20:18:=
01<x-apple-data-detectors://4>,driftfix=3Dslew -no-kvm-pit-reinjectio=
n<br>-no-hpet -no-shutdown -boot strict=3Don -device<br>piix3-usb-uhci,id=
=3Dusb,bus=3Dpci.0,addr=3D0x1.0x2 -device<br>virtio-scsi-pci,id=3Dscsi0,bus=
=3Dpci.0,addr=3D0x4 -device<br>virtio-serial-pci,id=3Dvirtio-serial0,max_po=
rts=3D16,bus=3Dpci.0,addr=3D0x5<br>-drive if=3Dnone,id=3Ddrive-ide0-1-0,rea=
donly=3Don,format=3Draw,serial=3D<br>-device ide-cd,bus=3Dide.1,unit=3D0,dr=
ive=3Ddrive-ide0-1-0,id=3Dide0-1-0<br>-drive file=3D/rhev/data-center/00000=
002-0002-0002-0002-0000000001e4/1dc71096-27c4-4256-b2ac-bd7265525c69/images=
/b4b5426b-95e3-41af-b286-da245891cdaf/0f688d49-97e3-4f1d-84d4-ac1432d903b3,=
if=3Dnone,id=3Ddrive-virtio-disk0,format=3Dqcow2,serial=3Db4b5426b-95e3-41a=
f-b286-da245891cdaf,cache=3Dnone,werror=3Dstop,rerror=3Dstop,aio=3Dthreads<=
br>-device virtio-blk-pci,scsi=3Doff,bus=3Dpci.0,addr=3D0x6,drive=3Ddrive-v=
irtio-disk0,id=3Dvirtio-disk0,bootindex=3D1<br>-netdev tap,fd=3D26,id=3Dhos=
tnet0,vhost=3Don,vhostfd=3D27 -device<br>virtio-net-pci,netdev=3Dhostnet0,i=
d=3Dnet0,mac=3D00:1a:4a:db:94:01,bus=3Dpci.0,addr=3D0x3<br>-chardev socket,=
id=3Dcharchannel0,path=3D/var/lib/libvirt/qemu/channels/e46bca87-4df5-4287-=
844b-90a26fccef33.com.redhat.rhevm.vdsm,server,nowait<br>-device virtserial=
port,bus=3Dvirtio-serial0.0,nr=3D1,chardev=3Dcharchannel0,id=3Dchannel0,nam=
e=3Dcom.redhat.rhevm.vdsm<br>-chardev socket,id=3Dcharchannel1,path=3D/var/=
lib/libvirt/qemu/channels/e46bca87-4df5-4287-844b-90a26fccef33.org.qemu.gue=
st_agent.0,server,nowait<br>-device virtserialport,bus=3Dvirtio-serial0.0,n=
r=3D2,chardev=3Dcharchannel1,id=3Dchannel1,name=3Dorg.qemu.guest_agent.0<br=
>-chardev spicevmc,id=3Dcharchannel2,name=3Dvdagent -device<br>virtserialpo=
rt,bus=3Dvirtio-serial0.0,nr=3D3,chardev=3Dcharchannel2,id=3Dchannel2,name=
=3Dcom.redhat.spice.0<br>-spice tls-port=3D5900,addr=3D10.0.0.92,x509-dir=
=3D/etc/pki/vdsm/libvirt-spice,tls-channel=3Dmain,tls-channel=3Ddisplay,tls=
-channel=3Dinputs,tls-channel=3Dcursor,tls-channel=3Dplayback,tls-channel=
=3Drecord,tls-channel=3Dsmartcard,tls-channel=3Dusbredir,seamless-migration=
=3Don<br>-k en-us -vga qxl -global qxl-vga.ram_size=3D67108864<tel:67108=
864> -global<br>qxl-vga.vram_size=3D33554432<tel:33554432> -incomi=
ng tcp:[::]:49152 -device<br>virtio-balloon-pci,id=3Dballoon0,bus=3Dpci.0,a=
ddr=3D0x7<br>[root@compute2-1 vdsm]# tail -f /var/log/ovirt-hosted-engine-h=
a/agent.log<br>MainThread::INFO::2014-12-28<br>20:49:27,315::state_decorato=
rs::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)<b=
r>Local maintenance detected<br>MainThread::INFO::2014-12-28<br>20:49:27,64=
6::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEng=
ine::(start_monitoring)<br>Current state LocalMaintenance (score: 0)<br>Mai=
nThread::INFO::2014-12-28<br>20:49:27,646::hosted_engine::332::ovirt_hosted=
_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Best rem=
ote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-12-28<br>=
20:49:37,732::state_decorators::124::ovirt_hosted_engine_ha.agent.hosted_en=
gine.HostedEngine::(check)<br>Local maintenance detected<br>MainThread::INF=
O::2014-12-28<br>20:49:37,961::hosted_engine::327::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>Current state LocalM=
aintenance (score: 0)<br>MainThread::INFO::2014-12-28<br>20:49:37,961::host=
ed_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(s=
tart_monitoring)<br>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>Main=
Thread::INFO::2014-12-28<br>20:49:48,048::state_decorators::124::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>Local maintenance=
detected<br>MainThread::INFO::2014-12-28<br>20:49:48,319::states::208::ovi=
rt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score)<br>Score is 0=
due to local maintenance mode<br>MainThread::INFO::2014-12-28<br>20:49:48,=
319::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedE=
ngine::(start_monitoring)<br>Current state LocalMaintenance (score: 0)<br>M=
ainThread::INFO::2014-12-28<br>20:49:48,319::hosted_engine::332::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Best r=
emote host 10.0.0.94 (id: 1, score: 2400)<br><div><br></div>Thanks,<br>Cong=
<br><div><br></div><br>On 2014/12/28, at 3:46, "Artyom Lukianov" <alukia=
no@redhat.com<mailto:alukiano@redhat.com>> wrote:<br><div><br></di=
v>I see that you set local maintenance on host3 that do not have engine vm =
on it, so it nothing to migrate from this host.<br>If you set local mainten=
ance on host1, vm must migrate to another host with positive score.<br>Than=
ks<br><div><br></div>----- Original Message -----<br>From: "Cong Yue" <C=
ong_Yue@alliedtelesis.com<mailto:Cong_Yue@alliedtelesis.com>><br>T=
o: "Simone Tiraboschi" <stirabos@redhat.com<mailto:stirabos@redhat.co=
m>><br>Cc: users@ovirt.org<mailto:users@ovirt.org><br>Sent: Sat=
urday, December 27, 2014 6:58:32 PM<br>Subject: Re: [ovirt-users] VM failov=
er with ovirt3.5<br><div><br></div>Hi<br><div><br></div>I had a try with "h=
osted-engine --set-maintence --mode=3Dlocal" on<br>compute2-1, which is hos=
t 3 in my cluster. From the log, it shows<br>maintence mode is dectected, b=
ut migration does not happen.<br><div><br></div>The logs are as follows. Is=
there any other config I need to check?<br><div><br></div>[root@compute2-1=
vdsm]# hosted-engine --vm-status<br><div><br></div><br>--=3D=3D Host 1 sta=
tus =3D=3D-<br><div><br></div>Status up-to-date =
: True<br>Hostname =
: 10.=
0.0.94<br>Host ID &=
nbsp; : 1<br>Engine status =
: {"health": =
"good", "vm": "up",<br>"detail": "up"}<br>Score =
&nbs=
p;: 2400<br>Local maintenance &nb=
sp; : False<br>Host timestamp &nbs=
p; : 836296<br>Extra metadata (valid at =
timestamp):<br>metadata_parse_version=3D1<br>metadata_feature_version=3D1<b=
r>timestamp=3D836296 (Sat Dec 27 11:42:39 2014)<br>host-id=3D1<br>score=3D2=
400<br>maintenance=3DFalse<br>state=3DEngineUp<br><div><br></div><br>--=3D=
=3D Host 2 status =3D=3D--<br><div><br></div>Status up-to-date  =
; : True<br>Hostname =
&nbs=
p; : 10.0.0.93<br>Host ID =
: 2<br>Engine status=
&nbs=
p;: {"reason": "vm not running on<br>this host", "health": "bad", "vm": "do=
wn", "detail": "unknown"}<br>Score  =
; : 2400<br>L=
ocal maintenance &n=
bsp;: False<br>Host timestamp &nb=
sp; : 687358<br>Extra metadata (valid at timestamp):<b=
r>metadata_parse_version=3D1<br>metadata_feature_version=3D1<br>timestamp=
=3D687358 (Sat Dec 27 08:42:04 2014)<br>host-id=3D2<br>score=3D2400<br>main=
tenance=3DFalse<br>state=3DEngineDown<br><div><br></div><br>--=3D=3D Host 3=
status =3D=3D--<br><div><br></div>Status up-to-date &=
nbsp; : True<br>Hostname &n=
bsp; =
: 10.0.0.92<br>Host ID &nb=
sp; : 3<br>Engine status &n=
bsp; : {"reas=
on": "vm not running on<br>this host", "health": "bad", "vm": "down", "deta=
il": "unknown"}<br>Score &=
nbsp; : 0<br>Local maintena=
nce : True<br=
>Host timestamp &nb=
sp; : 681827<br>Extra metadata (valid at timestamp):<br>metadata_par=
se_version=3D1<br>metadata_feature_version=3D1<br>timestamp=3D681827 (Sat D=
ec 27 08:42:40 2014)<br>host-id=3D3<br>score=3D0<br>maintenance=3DTrue<br>s=
tate=3DLocalMaintenance<br>[root@compute2-1 vdsm]# tail -f /var/log/ovirt-h=
osted-engine-ha/agent.log<br>MainThread::INFO::2014-12-27<br>08:42:41,109::=
hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine=
::(start_monitoring)<br>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>=
MainThread::INFO::2014-12-27<br>08:42:51,198::state_decorators::124::ovirt_=
hosted_engine_ha.agent.hosted_engine.HostedEngine::(check)<br>Local mainten=
ance detected<br>MainThread::INFO::2014-12-27<br>08:42:51,420::hosted_engin=
e::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_mon=
itoring)<br>Current state LocalMaintenance (score: 0)<br>MainThread::INFO::=
2014-12-27<br>08:42:51,420::hosted_engine::332::ovirt_hosted_engine_ha.agen=
t.hosted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0=
.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-12-27<br>08:43:01,507::s=
tate_decorators::124::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngi=
ne::(check)<br>Local maintenance detected<br>MainThread::INFO::2014-12-27<b=
r>08:43:01,773::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_eng=
ine.HostedEngine::(start_monitoring)<br>Current state LocalMaintenance (sco=
re: 0)<br>MainThread::INFO::2014-12-27<br>08:43:01,773::hosted_engine::332:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2=
014-12-27<br>08:43:11,859::state_decorators::124::ovirt_hosted_engine_ha.ag=
ent.hosted_engine.HostedEngine::(check)<br>Local maintenance detected<br>Ma=
inThread::INFO::2014-12-27<br>08:43:12,072::hosted_engine::327::ovirt_hoste=
d_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Current=
state LocalMaintenance (score: 0)<br>MainThread::INFO::2014-12-27<br>08:43=
:12,072::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.Hos=
tedEngine::(start_monitoring)<br>Best remote host 10.0.0.94 (id: 1, score: =
2400)<br><div><br></div><br><div><br></div>[root@compute2-3 ~]# tail -f /va=
r/log/ovirt-hosted-engine-ha/agent.log<br>MainThread::INFO::2014-12-27<br>1=
1:36:28,855::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine=
.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (id: 2, sco=
re: 2400)<br>MainThread::INFO::2014-12-27<br>11:36:39,130::hosted_engine::3=
27::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitor=
ing)<br>Current state EngineUp (score: 2400)<br>MainThread::INFO::2014-12-2=
7<br>11:36:39,130::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_=
engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93 (id: =
2, score: 2400)<br>MainThread::INFO::2014-12-27<br>11:36:49,449::hosted_eng=
ine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_m=
onitoring)<br>Current state EngineUp (score: 2400)<br>MainThread::INFO::201=
4-12-27<br>11:36:49,449::hosted_engine::332::ovirt_hosted_engine_ha.agent.h=
osted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.93=
(id: 2, score: 2400)<br>MainThread::INFO::2014-12-27<br>11:36:59,739::host=
ed_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(s=
tart_monitoring)<br>Current state EngineUp (score: 2400)<br>MainThread::INF=
O::2014-12-27<br>11:36:59,739::hosted_engine::332::ovirt_hosted_engine_ha.a=
gent.hosted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.=
0.0.93 (id: 2, score: 2400)<br>MainThread::INFO::2014-12-27<br>11:37:09,779=
::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(co=
nsume)<br>Engine vm running on localhost<br>MainThread::INFO::2014-12-27<br=
>11:37:10,026::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engi=
ne.HostedEngine::(start_monitoring)<br>Current state EngineUp (score: 2400)=
<br>MainThread::INFO::2014-12-27<br>11:37:10,026::hosted_engine::332::ovirt=
_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>B=
est remote host 10.0.0.93 (id: 2, score: 2400)<br>MainThread::INFO::2014-12=
-27<br>11:37:20,331::hosted_engine::327::ovirt_hosted_engine_ha.agent.hoste=
d_engine.HostedEngine::(start_monitoring)<br>Current state EngineUp (score:=
2400)<br>MainThread::INFO::2014-12-27<br>11:37:20,331::hosted_engine::332:=
:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring=
)<br>Best remote host 10.0.0.93 (id: 2, score: 2400)<br><div><br></div><br>=
[root@compute2-2 ~]# tail -f /var/log/ovirt-hosted-engine-ha/agent.log<br>M=
ainThread::INFO::2014-12-27<br>08:36:12,462::hosted_engine::332::ovirt_host=
ed_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br>Best r=
emote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-12-27<b=
r>08:36:22,797::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_eng=
ine.HostedEngine::(start_monitoring)<br>Current state EngineDown (score: 24=
00)<br>MainThread::INFO::2014-12-27<br>08:36:22,798::hosted_engine::332::ov=
irt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<b=
r>Best remote host 10.0.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2014=
-12-27<br>08:36:32,876::states::437::ovirt_hosted_engine_ha.agent.hosted_en=
gine.HostedEngine::(consume)<br>Engine vm is running on host 10.0.0.94 (id =
1)<br>MainThread::INFO::2014-12-27<br>08:36:33,169::hosted_engine::327::ovi=
rt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring)<br=
>Current state EngineDown (score: 2400)<br>MainThread::INFO::2014-12-27<br>=
08:36:33,169::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engin=
e.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.94 (id: 1, sc=
ore: 2400)<br>MainThread::INFO::2014-12-27<br>08:36:43,567::hosted_engine::=
327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monito=
ring)<br>Current state EngineDown (score: 2400)<br>MainThread::INFO::2014-1=
2-27<br>08:36:43,567::hosted_engine::332::ovirt_hosted_engine_ha.agent.host=
ed_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0.0.94 (i=
d: 1, score: 2400)<br>MainThread::INFO::2014-12-27<br>08:36:53,858::hosted_=
engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(star=
t_monitoring)<br>Current state EngineDown (score: 2400)<br>MainThread::INFO=
::2014-12-27<br>08:36:53,858::hosted_engine::332::ovirt_hosted_engine_ha.ag=
ent.hosted_engine.HostedEngine::(start_monitoring)<br>Best remote host 10.0=
.0.94 (id: 1, score: 2400)<br>MainThread::INFO::2014-12-27<br>08:37:04,028:=
:state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngin=
e::(refresh)<br>Global metadata: {'maintenance': False}<br>MainThread::INFO=
::2014-12-27<br>08:37:04,028::state_machine::165::ovirt_hosted_engine_ha.ag=
ent.hosted_engine.HostedEngine::(refresh)<br>Host 10.0.0.94 (id 1): {'extra=
':<br>'metadata_parse_version=3D1\nmetadata_feature_version=3D1\ntimestamp=
=3D835987<br>(Sat Dec 27 11:37:30<br>2014)\nhost-id=3D1\nscore=3D2400\nmain=
tenance=3DFalse\nstate=3DEngineUp\n',<br>'hostname': '10.0.0.94', 'alive': =
True, 'host-id': 1, 'engine-status':<br>{'health': 'good', 'vm': 'up', 'det=
ail': 'up'}, 'score': 2400,<br>'maintenance': False, 'host-ts': 835987}<br>=
MainThread::INFO::2014-12-27<br>08:37:04,028::state_machine::165::ovirt_hos=
ted_engine_ha.agent.hosted_engine.HostedEngine::(refresh)<br>Host 10.0.0.92=
(id 3): {'extra':<br>'metadata_parse_version=3D1\nmetadata_feature_version=
=3D1\ntimestamp=3D681528<br>(Sat Dec 27 08:37:41<br>2014)\nhost-id=3D3\nsco=
re=3D0\nmaintenance=3DTrue\nstate=3DLocalMaintenance\n',<br>'hostname': '10=
.0.0.92', 'alive': True, 'host-id': 3, 'engine-status':<br>{'reason': 'vm n=
ot running on this host', 'health': 'bad', 'vm':<br>'down', 'detail': 'unkn=
own'}, 'score': 0, 'maintenance': True,<br>'host-ts': 681528}<br>MainThread=
::INFO::2014-12-27<br>08:37:04,028::state_machine::168::ovirt_hosted_engine=
_ha.agent.hosted_engine.HostedEngine::(refresh)<br>Local (id 2): {'engine-h=
ealth': {'reason': 'vm not running on this<br>host', 'health': 'bad', 'vm':=
'down', 'detail': 'unknown'}, 'bridge':<br>True, 'mem-free': 15300.0, 'mai=
ntenance': False, 'cpu-load': 0.0215,<br>'gateway': True}<br>MainThread::IN=
FO::2014-12-27<br>08:37:04,265::hosted_engine::327::ovirt_hosted_engine_ha.=
agent.hosted_engine.HostedEngine::(start_monitoring)<br>Current state Engin=
eDown (score: 2400)<br>MainThread::INFO::2014-12-27<br>08:37:04,265::hosted=
_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(sta=
rt_monitoring)<br>Best remote host 10.0.0.94 (id: 1, score: 2400)<br><div><=
br></div>Thanks,<br>Cong<br><div><br></div>On 2014/12/22, at 5:29, "Simone =
Tiraboschi" <stirabos@redhat.com<mailto:stirabos@redhat.com>> w=
rote:<br><div><br></div><br><div><br></div>----- Original Message -----<br>=
From: "Cong Yue" <Cong_Yue@alliedtelesis.com<mailto:Cong_Yue@alliedte=
lesis.com>><br>To: "Simone Tiraboschi" <stirabos(a)redhat.com<mai=
lto:stirabos@redhat.com>><br>Cc: users@ovirt.org<mailto:users@ovir=
t.org><br>Sent: Friday, December 19, 2014 7:22:10 PM<br>Subject: RE: [ov=
irt-users] VM failover with ovirt3.5<br><div><br></div>Thanks for the infor=
mation. This is the log for my three ovirt nodes.<br>From the output of hos=
ted-engine --vm-status, it shows the engine state for<br>my 2nd and 3rd ovi=
rt node is DOWN.<br>Is this the reason why VM failover not work in my envir=
onment?<br><div><br></div>No, they looks ok: you can run the engine VM on s=
ingle host at a time.<br><div><br></div>How can I make<br>also engine works=
for my 2nd and 3rd ovit nodes?<br><div><br></div>If you put the host 1 in =
local maintenance mode ( hosted-engine --set-maintenance --mode=3Dlocal ) t=
he VM should migrate to host 2; if you reactivate host 1 ( hosted-engine --=
set-maintenance --mode=3Dnone ) and put host 2 in local maintenance mode th=
e VM should migrate again.<br><div><br></div>Can you please try that and po=
st the logs if something is going bad?<br><div><br></div><br>--<br>--=3D=3D=
--
--== Host 1 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.94
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "up"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 150475
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=150475 (Fri Dec 19 13:12:18 2014)
    host-id=1
    score=2400
    maintenance=False
    state=EngineUp

--== Host 2 status ==--

Status up-to-date                  : True
Hostname                           : 10.0.0.93
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 1572
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1572 (Fri Dec 19 10:12:18 2014)
    host-id=2
    score=2400
    maintenance=False
    state=EngineDown

--== Host 3 status ==--

Status up-to-date                  : False
Hostname                           : 10.0.0.92
Host ID                            : 3
Engine status                      : unknown stale-data
Score                              : 2400
Local maintenance                  : False
Host timestamp                     : 987
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=987 (Fri Dec 19 10:09:58 2014)
    host-id=3
    score=2400
    maintenance=False
    state=EngineDown
--

And the /var/log/ovirt-hosted-engine-ha/agent.log for the three ovirt nodes is as follows:

--
10.0.0.94 (hosted-engine-1)
---
MainThread::INFO::2014-12-19 13:09:33,716::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:33,716::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:09:44,017::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:44,017::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:09:54,303::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:09:54,303::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:04,342::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
MainThread::INFO::2014-12-19 13:10:04,617::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:04,617::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::160::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Global metadata: {'maintenance': False}
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.93 (id 2): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=1448 (Fri Dec 19 10:10:14 2014)\nhost-id=2\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 'hostname': '10.0.0.93', 'alive': True, 'host-id': 2, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 'host-ts': 1448}
MainThread::INFO::2014-12-19 13:10:14,657::state_machine::165::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Host 10.0.0.92 (id 3): {'extra': 'metadata_parse_version=1\nmetadata_feature_version=1\ntimestamp=987 (Fri Dec 19 10:09:58 2014)\nhost-id=3\nscore=2400\nmaintenance=False\nstate=EngineDown\n', 'hostname': '10.0.0.92', 'alive': True, 'host-id': 3, 'engine-status': {'reason': 'vm not running on this host', 'health': 'bad', 'vm': 'down', 'detail': 'unknown'}, 'score': 2400, 'maintenance': False, 'host-ts': 987}
MainThread::INFO::2014-12-19 13:10:14,658::state_machine::168::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(refresh) Local (id 1): {'engine-health': {'health': 'good', 'vm': 'up', 'detail': 'up'}, 'bridge': True, 'mem-free': 1079.0, 'maintenance': False, 'cpu-load': 0.0269, 'gateway': True}
MainThread::INFO::2014-12-19 13:10:14,904::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:14,904::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:25,210::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:25,210::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:35,499::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:35,499::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:45,784::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:45,785::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:10:56,070::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:10:56,070::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:06,109::states::394::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(consume) Engine vm running on localhost
MainThread::INFO::2014-12-19 13:11:06,359::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:06,359::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:16,658::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:16,658::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:26,991::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:26,991::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
MainThread::INFO::2014-12-19 13:11:37,341::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUp (score: 2400)
MainThread::INFO::2014-12-19 13:11:37,341::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.93 (id: 2, score: 2400)
----

10.0.0.93 (hosted-engine-2)
MainThread::INFO::2014-12-19 10:12:18,339::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:18,339::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:28,651::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:28,652::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:39,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:39,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:49,338::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:49,338::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:12:59,642::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:12:59,642::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)
MainThread::INFO::2014-12-19 10:13:10,010::hosted_engine::327::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineDown (score: 2400)
MainThread::INFO::2014-12-19 10:13:10,010::hosted_engine::332::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Best remote host 10.0.0.94 (id: 1, score: 2400)

10.0.0.92 (hosted-engine-3)
same as 10.0.0.93
--

-----Original Message-----
From: Simone Tiraboschi [mailto:stirabos@redhat.com]
Sent: Friday, December 19, 2014 12:28 AM
To: Yue, Cong
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] VM failover with ovirt3.5

----- Original Message -----
From: "Cong Yue" <Cong_Yue(a)alliedtelesis.com>
To: users(a)ovirt.org
Sent: Friday, December 19, 2014 2:14:33 AM
Subject: [ovirt-users] VM failover with ovirt3.5

Hi

In my environment, I have 3 ovirt nodes as one cluster. And on top of host-1, there is one vm to host the ovirt engine.

Also I have one external storage for the cluster to use as the data domain of engine and data.

I confirmed live migration works well in my environment.

But it seems very buggy for VM failover if I try to force to shut down one ovirt node. Sometimes the VM in the node which is shut down can migrate to another host, but it takes more than several minutes.

Sometimes it can not migrate at all. Sometimes, only when the host is back, the VM begins to move.

Can you please check or share the logs under /var/log/ovirt-hosted-engine-ha/ ?

Is there some documentation to explain how VM failover is working? And are there some bugs reported related to this?

http://www.ovirt.org/Features/Self_Hosted_Engine#Agent_State_Diagram

Thanks in advance,

Cong
9 years, 11 months
Re: [ovirt-users] Backup and Restore of VMs
by Nathanaël Blanchet
On 29/12/2014 12:10, Nathanaël Blanchet wrote:
> Hello,
>
> Thank you for the script, yes, it is clearer now.
> However, there is something I misunderstand; my reasoning may be
> wrong, just tell me.
> It concerns the backup process, precisely the moment when the snapshot
> disk is attached to the backup VM... At that point, an external process
> is supposed to do this step. If we consider using the dd command to take
> a byte-to-byte copy of the snapshot disk, why not attach this cloned raw
> virtual disk directly to the new VM cloned from the OVF, instead of
> creating a new provisioned disk?
> Alternatively, one might do a file-level copy during the backup process
> (rsync-like), which implies formatting the newly created disk and many
> additional steps, such as creating Logical Volumes if needed, etc.
> Can anybody help me understand this step?
> Thank you.
>
> On 28/12/2014 10:02, Liron Aravot wrote:
>> Hi All,
>> I've uploaded an example script (oVirt python-sdk) that contains
>> examples of the steps
>> described at
>> http://www.ovirt.org/Features/Backup-Restore_API_Integration
>>
>> let me know how it works out for you -
>> https://github.com/laravot/backuprestoreapi
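As a rough companion to that script, here is a heavily hedged sketch of the first steps of the flow using the oVirt 3.x Python SDK; the engine URL, credentials and VM name are placeholders, and the generated accessor names (for example get_snapshot_status) should be checked against the SDK version actually installed.

#!/usr/bin/env python
# Hedged sketch of the first steps from the Backup-Restore API flow:
# snapshot the VM to be backed up, wait for the snapshot, list its disks.
# Engine URL, credentials and the VM name are placeholders; accessor
# names follow the 3.x SDK's generated getters and should be verified.
import time

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal',
          password='password',
          ca_file='/etc/pki/ovirt-engine/ca.pem')

vm = api.vms.get(name='vm-to-backup')
desc = 'backup-%s' % time.strftime('%Y%m%d%H%M%S')

# 1. Take a snapshot of the VM that should be backed up.
vm.snapshots.add(params.Snapshot(description=desc, persist_memorystate=False))

# 2. Wait until the snapshot settles (the status accessor is an assumption).
snap = [s for s in vm.snapshots.list() if s.get_description() == desc][0]
while vm.snapshots.get(id=snap.get_id()).get_snapshot_status() != 'ok':
    time.sleep(5)

# 3. These snapshot disks are what a backup appliance VM would attach and
#    copy out (for example with dd), as described on the feature page.
for disk in snap.disks.list():
    print("%s %s" % (disk.get_name(), disk.get_id()))

api.disconnect()

Whether that snapshot disk is then attached to a backup VM and streamed with dd, or copied at the file level, is exactly the trade-off discussed in this thread; the feature page documents the attach-by-snapshot-id step.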
>>
>>
>> ----- Original Message -----
>>> From: "Liron Aravot" <laravot(a)redhat.com>
>>> To: "Soeren Malchow" <soeren.malchow(a)mcon.net>
>>> Cc: "Vered Volansky" <vered(a)redhat.com>, Users(a)ovirt.org
>>> Sent: Wednesday, December 24, 2014 12:20:36 PM
>>> Subject: Re: [ovirt-users] Backup and Restore of VMs
>>>
>>> Hi guys,
>>> I'm currently working on complete example of the steps appear in -
>>> http://www.ovirt.org/Features/Backup-Restore_API_Integration
>>>
>>> will share with you as soon as i'm done with it.
>>>
>>> thanks,
>>> Liron
>>>
>>> ----- Original Message -----
>>>> From: "Soeren Malchow" <soeren.malchow(a)mcon.net>
>>>> To: "Vered Volansky" <vered(a)redhat.com>
>>>> Cc: Users(a)ovirt.org
>>>> Sent: Wednesday, December 24, 2014 11:58:01 AM
>>>> Subject: Re: [ovirt-users] Backup and Restore of VMs
>>>>
>>>> Dear Vered,
>>>>
>>>> at some point we have to start, and right now we are getting closer.
>>>> Even with the documentation it is sometimes hard to find the correct
>>>> place to start, especially without specific examples (and I have
>>>> decades of experience now).
>>>>
>>>> With the backup plugin that came from Lucas Vandroux we have a
>>>> starting point right now, and we will continue from here and try to
>>>> work with him on this.
>>>>
>>>> Regards
>>>> Soeren
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: users-bounces(a)ovirt.org [mailto:users-bounces@ovirt.org] On
>>>> Behalf Of
>>>> Blaster
>>>> Sent: Tuesday, December 23, 2014 5:49 PM
>>>> To: Vered Volansky
>>>> Cc: Users(a)ovirt.org
>>>> Subject: Re: [ovirt-users] Backup and Restore of VMs
>>>>
>>>> Sounds like a Chicken/Egg problem.
>>>>
>>>>
>>>>
>>>> On 12/23/2014 12:03 AM, Vered Volansky wrote:
>>>>> Well, real world is community...
>>>>> Maybe change the name of the thread in order to make this clearer for
>>>>> someone from the community who might be able to help.
>>>>> Maybe something like:
>>>>> Request for sharing real world example of VM backups.
>>>>>
>>>>> We obviously use it as part of development, but I don't have what
>>>>> you're asking for.
>>>>> If you try it yourself and stumble onto questions in the process,
>>>>> please
>>>>> ask the list and we'll do our best to help.
>>>>>
>>>>> Best Regards,
>>>>> Vered
>>>>>
>>>>> ----- Original Message -----
>>>>>> From: "Blaster" <blaster(a)556nato.com>
>>>>>> To: "Vered Volansky" <vered(a)redhat.com>
>>>>>> Cc: Users(a)ovirt.org
>>>>>> Sent: Tuesday, December 23, 2014 5:56:13 AM
>>>>>> Subject: Re: [ovirt-users] Backup and Restore of VMs
>>>>>>
>>>>>>
>>>>>> Vered,
>>>>>>
>>>>>> It sounds like Soeren already knows about that page. His issue, as
>>>>>> well as the issue of others judging by comments on here, is that
>>>>>> there aren't any real-world examples of how the API is used.
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Dec 22, 2014, at 9:26 AM, Vered Volansky <vered(a)redhat.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Please take a look at:
>>>>>>> http://www.ovirt.org/Features/Backup-Restore_API_Integration
>>>>>>>
>>>>>>> Specifically:
>>>>>>> http://www.ovirt.org/Features/Backup-Restore_API_Integration#Full_VM
>>>>>>>
>>>>>>> _Backups
>>>>>>>
>>>>>>> Regards,
>>>>>>> Vered
>>>>>>>
>>>>>>> ----- Original Message -----
>>>>>>>> From: "Soeren Malchow" <soeren.malchow(a)mcon.net>
>>>>>>>> To: Users(a)ovirt.org
>>>>>>>> Sent: Friday, December 19, 2014 1:44:38 PM
>>>>>>>> Subject: [ovirt-users] Backup and Restore of VMs
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Dear all,
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> ovirt: 3.5
>>>>>>>>
>>>>>>>> gluster: 3.6.1
>>>>>>>>
>>>>>>>> OS: CentOS 7 (except ovirt hosted engine = centos 6.6)
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> I have spent quite a while researching backup and restore for VMs;
>>>>>>>> so far I have come up with this as a start for us:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> - API calls to create scheduled snapshots of virtual machines. This
>>>>>>>> is for short-term storage and to guard against accidental deletion
>>>>>>>> within the VM, but not against storage corruption.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> - Since we are using a Gluster backend, Gluster snapshots. I wasn't
>>>>>>>> able to really test this so far, since the LV needs to be thin
>>>>>>>> provisioned and we did not do that in the setup.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> For the API calls we have the problem that we can not find any
>>>>>>>> existing scripts or something like that to do those snapshots (and
>>>>>>>> i/we are not developers enough to do that).
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> As an additional information, we have a ZFS based storage with
>>>>>>>> deduplication that we use for other backup purposes which does a
>>>>>>>> great job especially because of the deduplication (we can store
>>>>>>>> generations of backups without problems); this storage can be NFS
>>>>>>>> exported and used as backup repository.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Are there any backup and restore procedures you are using that
>>>>>>>> work for you, and can you point me in the right direction?
>>>>>>>>
>>>>>>>> I am a little bit lost right now and would appreciate any help.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Regards
>>>>>>>>
>>>>>>>> Soeren
>>>>>>>>
>
9 years, 11 months
vdsm noipspoof.py vdsm hook problem
by InterNetX - Juergen Gotteswinter
Hi,
I am trying to get the noipspoof.py hook up and running. It works
fine so far if I only feed it a single IP. When trying to add two or more,
comma separated as described in the source, the GUI tells me that
this isn't valid and won't let me do it.
I already tried modifying the regex, which made the engine accept a
2nd/3rd IP (comma separated), but it seems there is still something
wrong with parsing this somewhere else.
VDSM throws this:
vdsm vm.Vm ERROR vmId=`4c9cb160-2283-4769-a69c-434e6c992c2b`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2266, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3332, in _run
    domxml = hooks.before_vm_start(self._buildCmdLine(), self.conf)
  File "/usr/share/vdsm/hooks.py", line 142, in before_vm_start
    return _runHooksDir(domxml, 'before_vm_start', vmconf=vmconf)
  File "/usr/share/vdsm/hooks.py", line 110, in _runHooksDir
    raise HookError()
HookError
The VM fails to start, and the engine then tries this on every available host
(which, not surprisingly, fails too).
Does anyone have any ideas / patches / hints on how to modify this hook?
Thanks
Juergen
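A hedged sketch of how such a before_vm_start hook can consume a comma-separated list is shown below; the custom property name ('noipspoof'), the nwfilter name and the XML handling are assumptions meant for comparison with the shipped noipspoof.py, not a tested patch.

#!/usr/bin/python
# Hedged sketch of a before_vm_start hook that accepts a comma-separated
# list of IPs in the 'noipspoof' custom property and adds one <parameter>
# element per address to a <filterref> on every interface. Property name,
# filter name and XML layout are assumptions; compare with noipspoof.py.
import os

import hooking

if 'noipspoof' in os.environ:
    # e.g. noipspoof=10.0.0.10,10.0.0.11,10.0.0.12
    addresses = [ip.strip()
                 for ip in os.environ['noipspoof'].split(',')
                 if ip.strip()]

    domxml = hooking.read_domxml()
    for interface in domxml.getElementsByTagName('interface'):
        filterref = domxml.createElement('filterref')
        filterref.setAttribute('filter', 'no-ip-spoofing')
        for ip in addresses:
            parameter = domxml.createElement('parameter')
            parameter.setAttribute('name', 'IP')
            parameter.setAttribute('value', ip)
            filterref.appendChild(parameter)
        interface.appendChild(filterref)
    hooking.write_domxml(domxml)

The engine-side definition of the custom property (set via engine-config -s UserDefinedVMProperties=...) also has to use a validation regex that permits commas, something like noipspoof=^[0-9.,]+$, otherwise the GUI keeps rejecting multi-IP values.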
9 years, 11 months
Problem after update ovirt to 3.5
by Juan Jose
Hello everybody,
After upgrading my engine to oVirt 3.5, I also upgraded one of my hosts
to oVirt 3.5. After that everything apparently went well.
But within a few seconds my ISO domain is disconnected and it is impossible to
activate it. I'm attaching my engine.log. The error below is shown each time
I try to activate the ISO domain. Before the upgrade it was working without
problems:
2014-12-15 13:25:07,607 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: Failed to connect Host host1 to
the Storage Domains ISO_DOMAIN.
2014-12-15 13:25:07,608 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
(org.ovirt.thread.pool-8-thread-5) [460733dd] FINISH,
ConnectStorageServerVDSCommand, return:
{81c0a853-715c-4478-a812-6a74808fc482=477}, log id: 3590969e
2014-12-15 13:25:07,615 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-8-thread-5) [460733dd] Correlation ID: null, Call
Stack: null, Custom Event ID: -1, Message: The error message for connection
ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 returned by
VDSM was: Problem while trying to mount target
2014-12-15 13:25:07,616 ERROR
[org.ovirt.engine.core.bll.storage.NFSStorageHelper]
(org.ovirt.thread.pool-8-thread-5) [460733dd] The connection with details
ovirt-engine.siee.local:/var/lib/exports/iso-20140303082312 failed because
of error code 477 and error message is: problem while trying to mount target
If any other information is required, please tell me.
Many thanks in advance,
Juanjo.
9 years, 11 months
Stucked VM Migration and now only run once
by Kurt Woitschach
Hi all,
we have a problem with a VM that can only be started in run-once mode.
After a temporary network disconnect on the hosting node, the VM (and
some others) was down. When I tried to start it regularly, it showed a
"currently being migrated" status.
I could only start it with run-once.
Rebooting didn't make a difference.
Any ideas?
Greets
Kurt
--
Kurt Woitschach-Müller kurt.woitschach-mueller(a)tngtech.com * +49-1743180076
TNG Technology Consulting GmbH, Betastr. 13a, 85774 Unterföhring
Geschäftsführer: Henrik Klagges, Gerhard Müller, Christoph Stock
Sitz: Unterföhring * Amtsgericht München * HRB 135082
9 years, 11 months
Re: [ovirt-users] Can not connect to Storage domain data
by Yue, Cong
I found a workaround for this.
For some reason my data storage domain could not be mounted, so I just mounted
it manually, like:
mount -t nfs nfs2-3:/data /rhev/data-center/mnt/nfs2-3:_data
The folder "/rhev/data-center/mnt/nfs2-3:_data" had already been created. I
think this may be a bug, since in my environment I can reproduce it every time
I try to deploy the host a second time.
Thanks,
Cong
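In case it helps anyone scripting that workaround, below is a small hedged sketch that mounts the data domain only if it is not already listed in /proc/mounts; the export and mount point are the ones quoted above and must of course match your own setup.

#!/usr/bin/env python
# Hedged sketch of the manual workaround described above: mount the data
# storage domain if it is not already mounted. Export and mount point are
# taken from this thread and must be adapted to the local environment.
import os
import subprocess

EXPORT = "nfs2-3:/data"
MOUNTPOINT = "/rhev/data-center/mnt/nfs2-3:_data"


def is_mounted(mountpoint):
    # /proc/mounts lists the mount point in the second field
    with open("/proc/mounts") as mounts:
        return any(line.split()[1] == mountpoint for line in mounts)


if __name__ == "__main__":
    if not os.path.isdir(MOUNTPOINT):
        # normally already created by vdsm, as noted above
        os.makedirs(MOUNTPOINT)
    if not is_mounted(MOUNTPOINT):
        subprocess.check_call(["mount", "-t", "nfs", EXPORT, MOUNTPOINT])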
From: Yue, Cong
Sent: Thursday, December 18, 2014 2:17 PM
To: 'users(a)ovirt.org'
Subject: RE: Can not connect to Storage domain data
I think the problems in my case are related to the NFS version.
On the second host, if I change the value of Defaultvers in /etc/nfsmount.conf
from "Defaultvers=4" to "Defaultvers=3", the mount can not be done. When I
change it back to "Defaultvers=4", it works.
Also, /proc/mounts shows the NFS version is nfs4, but on my first host it is
nfs3.
Does anybody have a similar issue with this?
Thanks in advance,
Cong
From: Yue, Cong
Sent: Thursday, December 18, 2014 9:52 AM
To: users(a)ovirt.org<mailto:users@ovirt.org>
Subject: Can not connect to Storage domain data
Hi
I successfully deployed the first oVirt host with hosted-engine --deploy. The
engine VM works well.
However, when I try to create the second host in the same way as the guide at
http://community.redhat.com/blog/2014/11/up-and-running-with-ovirt-3-5-part-two/
I am not using GlusterFS; I just use one external NFS storage in my environment.
The issue I have is that in the engine administration menu it says "can not
connect to storage domain data".
On the second host, I checked both the storage and data domains with
nfs-check.py, and it shows the status is OK.
http://www.ovirt.org/Troubleshooting_NFS_Storage_Issues
During deployment of the second host, how does it try to mount the data domain?
Thanks,
9 years, 11 months
Re: [ovirt-users] Introduction!
by Donny Davis
Welcome to ovirt.
If you want to see ovirt check out cloudspin.me
Its free

Happy Connecting. Sent from my Sprint Samsung Galaxy S® 5

-------- Original message --------
From: Yedidyah Bar David <didi(a)redhat.com>
Date: 12/23/2014 11:49 PM (GMT-07:00)
To: Tom Weeks <tom.m.weeks(a)gmail.com>
Cc: users(a)ovirt.org
Subject: Re: [ovirt-users] Introduction!

Hi Tom,

----- Original Message -----
> From: "Tom Weeks" <tom.m.weeks(a)gmail.com>
> To: users(a)ovirt.org
> Sent: Wednesday, December 24, 2014 4:15:25 AM
> Subject: [ovirt-users] Introduction!
>
> Hello,
>
> I am happy to join the community and help support the project. I'm a long
> time user of vSphere/vCenter but I am an aspiring to work within open-source
> world. My work experience is in corporate environments and includes
> virtualization, storage, as well as basic networking.
>
> I hope I can start helping by converting my homelab from the VMware stack to
> oVirt. I can contribute by submitting bug requests and documentation...let
> me know if that would be helpful!

That would definitely be helpful!

Good luck and best regards,
--
Didi
9 years, 11 months
Introduction!
by Tom Weeks
Hello,
I am happy to join the community and help support the project. I'm a
long-time user of vSphere/vCenter, but I am aspiring to work within the
open-source world. My work experience is in corporate environments and
includes virtualization, storage, as well as basic networking.
I hope I can start helping by converting my homelab from the VMware stack
to oVirt. I can contribute by submitting bug requests and
documentation...let me know if that would be helpful!
-Tom
9 years, 11 months