hosted engine migration

Hi all, I have an oVirt 4.3.10.4 environment with 2 hosts. Normal VMs in this environment can be migrated, but the hosted engine VM cannot be migrated. Can anyone help? Thanks a lot!
hosts status:
normal vm migration:
hosted engine vm migration:

Have you checked under a shell the output of 'hosted-engine --vm-status'? Check the Score of the hosts. Maybe there is a node with a score of '0'?
Best Regards, Strahil Nikolov
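For a quick look at just the relevant fields, a minimal sketch (field names as printed by the 4.3 HA agent; adjust the patterns if your output differs):

    hosted-engine --vm-status | grep -E 'Hostname|Score|Engine status|maintenance'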

Thanks! The scores of all nodes are not '0'. I found that someone has already asked a question like this. It seems that this feature has been disabled in 4.3; I am not sure if it is enabled in 4.4.

I'm running oVirt 4.3.10 and I can migrate my Engine from node to node. I had one similar issue, but powering the HE off and on fixed it.
You have to check the vdsm log on the source and on the destination in order to figure out what is going on. You might also consider checking the libvirt logs on the destination.
Best Regards, Strahil Nikolov
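As a rough pointer, the usual log locations on the hosts (oVirt defaults; adjust if your logging setup differs):

    # vdsm, on both the source and the destination host
    tail -f /var/log/vdsm/vdsm.log
    # libvirt, on the destination host
    journalctl -u libvirtd -f
    less /var/log/libvirt/qemu/HostedEngine.log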

I could not find any logs because the migration button is disabled in the web UI. It seems that the engine migration operation is blocked before it even starts. Any other ideas? Thanks!

I have found some engine logs:
2020-09-07 09:00:45,428+08 INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-6) [29259482-1515-4c10-8458-59354a0953ac] Candidate host 'node22' ('585b374b-4c82-4f5c-aad7-196d9f5d5625') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU' (correlation id: null)
2020-09-07 09:00:45,428+08 INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-6) [29259482-1515-4c10-8458-59354a0953ac] Candidate host 'node28' ('a678a15d-19e6-46f2-80bf-c3181197a0a6') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU' (correlation id: null)
It seems that both hosts were filtered out.
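For reference, those scheduler decisions can be pulled out of the engine log with a simple grep (default log path on the engine VM; shown only as a convenience):

    grep "filtered out by 'VAR__FILTERTYPE__INTERNAL'" /var/log/ovirt-engine/engine.log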

On Mon, Sep 7, 2020 at 4:13 AM ddqlo <ddqlo@126.com> wrote:
I have found some engine logs:
2020-09-07 09:00:45,428+08 INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-6) [29259482-1515-4c10-8458-59354a0953ac] Candidate host 'node22' ('585b374b-4c82-4f5c-aad7-196d9f5d5625') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU' (correlation id: null)
2020-09-07 09:00:45,428+08 INFO [org.ovirt.engine.core.bll.scheduling.SchedulingManager] (default task-6) [29259482-1515-4c10-8458-59354a0953ac] Candidate host 'node28' ('a678a15d-19e6-46f2-80bf-c3181197a0a6') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU' (correlation id: null)
It seems that both hosts were filtered out.
So please check the CPU configuration of the hosts, cluster, VM, etc.
One more thing you can try is to force putting the host into maintenance - this will require migrating the engine VM.
If the engine refuses to do that because it can't migrate the VM due to the above issue, you can try instead:
hosted-engine --set-maintenance --mode=local
I think this overrides the engine and will force a migration. I didn't try it recently.
Best regards,
-- Didi
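For reference, the host-side maintenance toggle exposed by the hosted-engine CLI (run on the host you want to drain; --mode=none reverts it):

    hosted-engine --set-maintenance --mode=local
    hosted-engine --set-maintenance --mode=none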

My hosts' CPU: Intel Haswell-noTSX Family
Cluster CPU: Intel Haswell-noTSX Family
HostedEngine VM CPU: Intel Haswell-noTSX Family
When I tried to put the host in maintenance in the web UI, I got an error:
When I typed the command, I got this:
[root@node22 ~]# hosted-engine --set-maintenance --mode=local
Unable to enter local maintenance mode: the engine VM is running on the current host, please migrate it before entering local maintenance mode.

On Tue, Sep 8, 2020 at 4:33 AM ddqlo <ddqlo@126.com> wrote:
My hosts' CPU: Intel Haswell-noTSX Family
Cluster CPU: Intel Haswell-noTSX Family
HostedEngine VM CPU: Intel Haswell-noTSX Family
When I tried to put the host in maintenance in the web UI, I got an error:
Adding Arik. Arik - any idea what else to test?
When I typed the command, I got this:
[root@node22 ~]# hosted-engine --set-maintenance --mode=local
Unable to enter local maintenance mode: the engine VM is running on the current host, please migrate it before entering local maintenance mode.
Sorry, I wasn't aware that this was disabled since 4.3.5, about a year ago:
https://gerrit.ovirt.org/#/q/Ia06b9bc6e65a7937e6d6462c001b59572369fe66,n,z
So you'll have to first fix migration on the engine level.
Best regards,
-- Didi

I think you can try to set one of the HE hosts into maintenance and then use the UI to 'reinstall' it. Don't forget to also mark the host as a HE host (a dropdown in the UI wizard).
Best Regards, Strahil Nikolov

I have tried that. It does not work.
My host network:
Will this help?

You can use the following:
vim ~/.bashrc
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
<Save and Exit>
source ~/.bashrc
# Show host capabilities
virsh capabilities
Now repeat on the other nodes and compare the CPU sections of the outputs.
Best Regards, Strahil Nikolov
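To pull out just the CPU section instead of reading the whole XML, something like this should work on each node (xmllint ships with libxml2 on EL hosts; it is only a convenience, the full output works too):

    virsh capabilities | xmllint --xpath '/capabilities/host/cpu' -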

Differences:
<host>
  <uuid>a15b30fd-2de2-4bea-922d-d0de2ee3b76a</uuid>
  ......
  <counter name='tsc' frequency='3600008000' scaling='no'/>
  ......
  <topology>
    <cells num='1'>
      <cell id='0'>
        <memory unit='KiB'>32904772</memory>
        <pages unit='KiB' size='4'>8226193</pages>
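If it is easier to read, a full diff of the capability dumps also works (the hostnames below are just the ones from this thread; copy one file to the other host first):

    virsh capabilities > /tmp/caps-$(hostname -s).xml    # run on each host
    diff -u /tmp/caps-node22.xml /tmp/caps-node28.xml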

What is the output of 'hosted-engine --vm-status' on the node where the HostedEngine is running?
Best Regards, Strahil Nikolov

--== Host node28 (id: 1) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : node28
Host ID                            : 1
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score                              : 1800
stopped                            : False
Local maintenance                  : False
crc32                              : 4ac6105b
local_conf_timestamp               : 1794597
Host timestamp                     : 1794597
Extra metadata (valid at timestamp):
	metadata_parse_version=1
	metadata_feature_version=1
	timestamp=1794597 (Tue Sep 15 09:47:17 2020)
	host-id=1
	score=1800
	vm_conf_refresh_time=1794597 (Tue Sep 15 09:47:17 2020)
	conf_on_shared_storage=True
	maintenance=False
	state=EngineDown
	stopped=False

--== Host node22 (id: 2) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : node22
Host ID                            : 2
Engine status                      : {"health": "good", "vm": "up", "detail": "Up"}
Score                              : 1800
stopped                            : False
Local maintenance                  : False
crc32                              : ffc41893
local_conf_timestamp               : 1877876
Host timestamp                     : 1877876
Extra metadata (valid at timestamp):
	metadata_parse_version=1
	metadata_feature_version=1
	timestamp=1877876 (Tue Sep 15 09:47:13 2020)
	host-id=2
	score=1800
	vm_conf_refresh_time=1877876 (Tue Sep 15 09:47:13 2020)
	conf_on_shared_storage=True
	maintenance=False
	state=EngineUp
	stopped=False

Both nodes have a lower than usual score (it should be 3400). Based on the score, you are probably suffering from the gateway-score-penalty [1][2]. Check if your gateway is pingable.
Best Regards, Strahil Nikolov
1 - https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf (page 8)
2 - /etc/ovirt-hosted-engine-ha/agent.conf
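A quick way to check that on each host (the gateway address the HA agent monitors should be the one recorded in /etc/ovirt-hosted-engine/hosted-engine.conf; <gateway-ip> below is a placeholder):

    grep -i gateway /etc/ovirt-hosted-engine/hosted-engine.conf
    ping -c 3 <gateway-ip>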

My gateway was not pingable. I have fixed this problem and now both nodes have a score of 3400. Yet, the hosted engine still cannot be migrated. The same log appears in engine.log: host filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU'.

Can you verify the HostedEngine's CPU?
1. ssh to the host hosting the HE
2. alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
3. virsh dumpxml HostedEngine
Then set the alias for virsh on all hosts; 'virsh capabilities' should show the hosts' <cpu><model>.
Best Regards, Strahil Nikolov
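A compact way to compare the two sides, assuming the virsh alias above is in place (grep is used only to trim the XML):

    # CPU definition of the HostedEngine VM
    virsh dumpxml HostedEngine | grep -A 10 '<cpu'
    # CPU model reported by the host itself
    virsh capabilities | grep -A 10 '<cpu>'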
Both nodes have a lower than usual score (it should be 3400). Based on the score, you are probably suffering from the gateway-score-penalty [1][2]. Check if your gateway is pingable.
Best Regards, Strahil Nikolov
1 - https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf (page 8)
2 - /etc/ovirt-hosted-engine-ha/agent.conf
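A quick way to check this on a host (a sketch; it assumes the gateway the HA agent pings is the one recorded in /etc/ovirt-hosted-engine/hosted-engine.conf, and that any locally overridden penalty values would show up in the agent.conf referenced in [2]):

grep -i gateway /etc/ovirt-hosted-engine/hosted-engine.conf    # which gateway the HA agent monitors
ping -c 3 <gateway-address-from-the-line-above>                # the score penalty applies when this fails
grep -i penalty /etc/ovirt-hosted-engine-ha/agent.conf         # any locally overridden score penalties

A score of 1800 rather than 3400, as in the status output below, is consistent with such a penalty being applied.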
On Tuesday, 15 September 2020, 04:49:48 GMT+3, ddqlo <ddqlo@126.com> wrote:
--== Host node28 (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : node28
Host ID : 1
Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score : 1800
stopped : False
Local maintenance : False
crc32 : 4ac6105b
local_conf_timestamp : 1794597
Host timestamp : 1794597
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1794597 (Tue Sep 15 09:47:17 2020)
    host-id=1
    score=1800
    vm_conf_refresh_time=1794597 (Tue Sep 15 09:47:17 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineDown
    stopped=False
--== Host node22 (id: 2) status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : node22
Host ID : 2
Engine status : {"health": "good", "vm": "up", "detail": "Up"}
Score : 1800
stopped : False
Local maintenance : False
crc32 : ffc41893
local_conf_timestamp : 1877876
Host timestamp : 1877876
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1877876 (Tue Sep 15 09:47:13 2020)
    host-id=2
    score=1800
    vm_conf_refresh_time=1877876 (Tue Sep 15 09:47:13 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUp
    stopped=False
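For reference, a compact way to run the comparison Strahil suggests above (a sketch; it assumes the virsh alias with the oVirt authfile has already been set as in his steps):

# on the host running the engine VM: which CPU model the VM was started with
virsh dumpxml HostedEngine | grep -E '<model|feature policy'
# on every host: which CPU model and features the hardware actually exposes
virsh capabilities | grep -E '<model>|<feature name'

A host whose reported model is older than the model requested by the VM will generally be filtered out as a migration destination.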

HostedEngine:
...... <model fallback='forbid'>Haswell-noTSX</model> ......
Both of the hosts:
...... <model>Westmere</model> ......
Other VMs which can be migrated:
...... <model fallback='forbid'>Haswell-noTSX</model> ......

It would be easier if you posted the whole xml.
What about the sections (in HE xml) starting with: feature policy=
Also the hosts have a section which contains: <feature name=
If you can, share the xml sections of a good VM as well.
Best Regards, Strahil Nikolov
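For reference, one way to pull out just those sections on the command line (a sketch, again assuming the virsh alias from the earlier steps; the sed ranges simply bracket the <cpu> ... </cpu> blocks):

virsh dumpxml HostedEngine | sed -n '/<cpu /,/<\/cpu>/p'      # VM side: model plus the "feature policy=" lines
virsh capabilities | sed -n '/<cpu>/,/<\/cpu>/p'              # host side: model plus the "<feature name=" lines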

HE: <domain type='kvm' id='1'> <name>HostedEngine</name> <uuid>b4e805ff-556d-42bd-a6df-02f5902fd01c</uuid> <metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ns0:qos/> <ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0"> <ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion> <ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot> <ovirt-vm:launchPaused>false</ovirt-vm:launchPaused> <ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize> <ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb> <ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior> <ovirt-vm:startTime type="float">1600307555.19</ovirt-vm:startTime> <ovirt-vm:device mac_address="56:6f:9b:b0:00:01"> <ovirt-vm:network>external</ovirt-vm:network> <ovirt-vm:custom> <ovirt-vm:queues>4</ovirt-vm:queues> </ovirt-vm:custom> </ovirt-vm:device> <ovirt-vm:device mac_address="00:16:3e:50:c1:97"> <ovirt-vm:network>ovirtmgmt</ovirt-vm:network> <ovirt-vm:custom> <ovirt-vm:queues>4</ovirt-vm:queues> </ovirt-vm:custom> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="vda"> <ovirt-vm:domainID>c17c1934-332f-464c-8f89-ad72463c00b3</ovirt-vm:domainID> <ovirt-vm:guestName>/dev/vda2</ovirt-vm:guestName> <ovirt-vm:imageID>8eca143a-4535-4421-bd35-9f5764d67d70</ovirt-vm:imageID> <ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID> <ovirt-vm:shared>exclusive</ovirt-vm:shared> <ovirt-vm:volumeID>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33</ovirt-vm:volumeID> <ovirt-vm:specParams> <ovirt-vm:pinToIoThread>1</ovirt-vm:pinToIoThread> </ovirt-vm:specParams> <ovirt-vm:volumeChain> <ovirt-vm:volumeChainNode> <ovirt-vm:domainID>c17c1934-332f-464c-8f89-ad72463c00b3</ovirt-vm:domainID> <ovirt-vm:imageID>8eca143a-4535-4421-bd35-9f5764d67d70</ovirt-vm:imageID> <ovirt-vm:leaseOffset type="int">108003328</ovirt-vm:leaseOffset> <ovirt-vm:leasePath>/dev/c17c1934-332f-464c-8f89-ad72463c00b3/leases</ovirt-vm:leasePath> <ovirt-vm:path>/rhev/data-center/mnt/blockSD/c17c1934-332f-464c-8f89-ad72463c00b3/images/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33</ovirt-vm:path> <ovirt-vm:volumeID>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33</ovirt-vm:volumeID> </ovirt-vm:volumeChainNode> </ovirt-vm:volumeChain> </ovirt-vm:device> <ovirt-vm:device devtype="disk" name="hdc"/> </ovirt-vm:vm> </metadata> <maxMemory slots='16' unit='KiB'>67108864</maxMemory> <memory unit='KiB'>16777216</memory> <currentMemory unit='KiB'>16777216</currentMemory> <vcpu placement='static' current='4'>64</vcpu> <iothreads>1</iothreads> <resource> <partition>/machine</partition> </resource> <sysinfo type='smbios'> <system> <entry name='manufacturer'>oVirt</entry> <entry name='product'>oVirt Node</entry> <entry name='version'>7-5.1804.el7.centos</entry> <entry name='serial'>00000000-0000-0000-0000-0CC47A6B3160</entry> <entry name='uuid'>b4e805ff-556d-42bd-a6df-02f5902fd01c</entry> </system> </sysinfo> <os> <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type> <boot dev='hd'/> <bios useserial='yes'/> <smbios mode='sysinfo'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact' check='full'> <model fallback='forbid'>Haswell-noTSX</model> <topology sockets='16' cores='4' threads='1'/> <feature policy='require' name='vme'/> <feature policy='require' name='f16c'/> <feature policy='require' name='rdrand'/> <feature policy='require' name='hypervisor'/> <feature policy='require' name='arat'/> <feature policy='require' name='xsaveopt'/> 
<feature policy='require' name='abm'/> <numa> <cell id='0' cpus='0-3' memory='16777216' unit='KiB'/> </numa> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>destroy</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver name='qemu' error_policy='report'/> <source startupPolicy='optional'/> <target dev='hdc' bus='ide'/> <readonly/> <alias name='ua-80fde7d5-ee7f-4201-9118-11bc6c3b8530'/> <address type='drive' controller='0' bus='1' target='0' unit='0'/> </disk> <disk type='block' device='disk' snapshot='no'> <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/> <source dev='/var/run/vdsm/storage/c17c1934-332f-464c-8f89-ad72463c00b3/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33'> <seclabel model='dac' relabel='no'/> </source> <backingStore/> <target dev='vda' bus='virtio'/> <serial>8eca143a-4535-4421-bd35-9f5764d67d70</serial> <alias name='ua-8eca143a-4535-4421-bd35-9f5764d67d70'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk> <controller type='scsi' index='0' model='virtio-scsi'> <driver iothread='1'/> <alias name='ua-27331e83-03f4-42a3-9554-c41649c02ba4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <controller type='ide' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0' ports='16'> <alias name='ua-8fe74299-b60f-4778-8e80-db05393a9489'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </controller> <controller type='usb' index='0' model='piix3-uhci'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'> <alias name='pci.0'/> </controller> <lease> <lockspace>c17c1934-332f-464c-8f89-ad72463c00b3</lockspace> <key>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33</key> <target path='/dev/c17c1934-332f-464c-8f89-ad72463c00b3/leases' offset='108003328'/> </lease> <interface type='bridge'> <mac address='00:16:3e:50:c1:97'/> <source bridge='ovirtmgmt'/> <target dev='vnet0'/> <model type='virtio'/> <driver name='vhost' queues='4'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <mtu size='1500'/> <alias name='ua-fada74ee-2338-4cde-a7ba-43a9a636ad6e'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <interface type='bridge'> <mac address='56:6f:9b:b0:00:01'/> <source bridge='external'/> <target dev='vnet1'/> <model type='virtio'/> <driver name='vhost' queues='4'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <mtu size='1500'/> <alias name='ua-f7b4c949-1f9f-4355-811d-88428c88ce4e'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> </interface> <serial type='unix'> <source mode='bind' path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='unix'> <source mode='bind' path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/> 
<target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.ovirt-guest-agent.0'/> <target type='virtio' name='ovirt-guest-agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0' state='disconnected'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.ovirt.hosted-engine-setup.0'/> <target type='virtio' name='org.ovirt.hosted-engine-setup.0' state='disconnected'/> <alias name='channel3'/> <address type='virtio-serial' controller='0' bus='0' port='4'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <graphics type='vnc' port='5900' autoport='yes' listen='192.168.1.22' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.1.22' network='vdsm-external'/> </graphics> <graphics type='spice' port='5901' tlsPort='5902' autoport='yes' listen='192.168.1.22' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.1.22' network='vdsm-external'/> <channel name='main' mode='secure'/> <channel name='display' mode='secure'/> <channel name='inputs' mode='secure'/> <channel name='cursor' mode='secure'/> <channel name='playback' mode='secure'/> <channel name='record' mode='secure'/> <channel name='smartcard' mode='secure'/> <channel name='usbredir' mode='secure'/> </graphics> <sound model='ich6'> <alias name='ua-bd287767-9b83-4e44-ac6f-8b527f9632b8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1' primary='yes'/> <alias name='ua-bcfb6b4b-0b3c-4d5b-ba2d-8ce40a65facd'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <stats period='5'/> <alias name='ua-39d36063-8808-47db-9fef-a0baad9f9661'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </memballoon> <rng model='virtio'> <backend model='random'>/dev/urandom</backend> <alias name='ua-75516d34-dd8f-4f0f-8496-e1f222a359a8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </rng> </devices> <seclabel type='dynamic' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c162,c716</label> <imagelabel>system_u:object_r:svirt_image_t:s0:c162,c716</imagelabel> </seclabel> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+107:+107</label> <imagelabel>+107:+107</imagelabel> </seclabel> </domain> hosts: <capabilities> <host> <uuid>b25adcec-eef2-49a5-8663-7cdcfc50891b</uuid> <cpu> <arch>x86_64</arch> <model>Westmere</model> <vendor>Intel</vendor> <microcode version='34'/> <counter name='tsc' frequency='3699996000' scaling='no'/> 
<topology sockets='1' cores='2' threads='2'/> <feature name='vme'/> <feature name='ds'/> <feature name='acpi'/> <feature name='ss'/> <feature name='ht'/> <feature name='tm'/> <feature name='pbe'/> <feature name='pclmuldq'/> <feature name='dtes64'/> <feature name='monitor'/> <feature name='ds_cpl'/> <feature name='vmx'/> <feature name='est'/> <feature name='tm2'/> <feature name='fma'/> <feature name='xtpr'/> <feature name='pdcm'/> <feature name='pcid'/> <feature name='movbe'/> <feature name='tsc-deadline'/> <feature name='xsave'/> <feature name='osxsave'/> <feature name='avx'/> <feature name='f16c'/> <feature name='rdrand'/> <feature name='arat'/> <feature name='fsgsbase'/> <feature name='tsc_adjust'/> <feature name='bmi1'/> <feature name='avx2'/> <feature name='smep'/> <feature name='bmi2'/> <feature name='erms'/> <feature name='invpcid'/> <feature name='xsaveopt'/> <feature name='pdpe1gb'/> <feature name='rdtscp'/> <feature name='abm'/> <feature name='invtsc'/> <pages unit='KiB' size='4'/> <pages unit='KiB' size='2048'/> <pages unit='KiB' size='1048576'/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> <suspend_hybrid/> </power_management> <iommu support='no'/> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> <uri_transport>rdma</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <memory unit='KiB'>32903488</memory> <pages unit='KiB' size='4'>8225872</pages> <pages unit='KiB' size='2048'>0</pages> <pages unit='KiB' size='1048576'>0</pages> <distances> <sibling id='0' value='10'/> </distances> <cpus num='4'> <cpu id='0' socket_id='0' core_id='0' siblings='0,2'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,3'/> <cpu id='2' socket_id='0' core_id='0' siblings='0,2'/> <cpu id='3' socket_id='0' core_id='1' siblings='1,3'/> </cpus> </cell> </cells> </topology> <cache> <bank id='0' level='3' type='both' size='3' unit='MiB' cpus='0-3'/> </cache> <secmodel> <model>selinux</model> <doi>0</doi> <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel> <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel> </secmodel> <secmodel> <model>dac</model> <doi>0</doi> <baselabel type='kvm'>+107:+107</baselabel> <baselabel type='qemu'>+107:+107</baselabel> </secmodel> </host> <guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine> <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine> <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine> <machine maxCpus='384'>pc-q35-rhel7.6.0</machine> <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine> <machine maxCpus='240'>rhel6.3.0</machine> <machine maxCpus='240'>rhel6.4.0</machine> <machine maxCpus='240'>rhel6.0.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine> <machine maxCpus='255'>pc-q35-rhel7.3.0</machine> <machine maxCpus='240'>rhel6.5.0</machine> <machine maxCpus='384'>pc-q35-rhel7.4.0</machine> <machine maxCpus='240'>rhel6.6.0</machine> <machine maxCpus='240'>rhel6.1.0</machine> <machine maxCpus='240'>rhel6.2.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine> <machine maxCpus='384'>pc-q35-rhel7.5.0</machine> <domain type='qemu'/> <domain type='kvm'> <emulator>/usr/libexec/qemu-kvm</emulator> </domain> </arch> <features> 
<cpuselection/> <deviceboot/> <disksnapshot default='on' toggle='no'/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest> <guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine> <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine> <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine> <machine maxCpus='384'>pc-q35-rhel7.6.0</machine> <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine> <machine maxCpus='240'>rhel6.3.0</machine> <machine maxCpus='240'>rhel6.4.0</machine> <machine maxCpus='240'>rhel6.0.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine> <machine maxCpus='255'>pc-q35-rhel7.3.0</machine> <machine maxCpus='240'>rhel6.5.0</machine> <machine maxCpus='384'>pc-q35-rhel7.4.0</machine> <machine maxCpus='240'>rhel6.6.0</machine> <machine maxCpus='240'>rhel6.1.0</machine> <machine maxCpus='240'>rhel6.2.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine> <machine maxCpus='384'>pc-q35-rhel7.5.0</machine> <domain type='qemu'/> <domain type='kvm'> <emulator>/usr/libexec/qemu-kvm</emulator> </domain> </arch> <features> <cpuselection/> <deviceboot/> <disksnapshot default='on' toggle='no'/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest> </capabilities> 在 2020-09-17 12:00:19,"Strahil Nikolov" <hunter86_bg@yahoo.com> 写道:
It would be easier if you posted the whole xml.
What about the sections (in HE xml) starting with: feature policy=
Also the hosts have a section which contains:
<feature name=
If you can share a VM's xml sections for a good VM.
Best Regards, Strahil Nikolov

Hm... interesting. The VM is using 'Haswell-noTSX' while the host is 'Westmere'. In my case I got no difference:
[root@ovirt1 ~]# virsh dumpxml HostedEngine | grep Opteron
    <model fallback='forbid'>Opteron_G5</model>
[root@ovirt1 ~]# virsh capabilities | grep Opteron
    <model>Opteron_G5</model>
Did you update the cluster holding the Hosted Engine?
I guess you can try to:
- Set global maintenance
- Power off the HostedEngine VM
- virsh dumpxml HostedEngine > /root/HE.xml
- use virsh edit to change the cpu of the HE (a non-permanent change)
- try to power on the modified HE
If it powers on, you can try to migrate it and, if it succeeds, you should make the change permanent.
Best Regards, Strahil Nikolov
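A sketch of that sequence as shell commands (assumptions: the virsh alias from earlier in the thread is set, the copy is kept in /root/HE.xml, and the replacement model is one the hosts actually report, e.g. Westmere; the XML is dumped before the power-off, and the edited copy is started with 'virsh create' in case the domain is only defined transiently and 'virsh edit' cannot be used on it):

hosted-engine --set-maintenance --mode=global    # keep the HA agents from restarting the VM
virsh dumpxml HostedEngine > /root/HE.xml        # keep a copy of the current definition
hosted-engine --vm-shutdown                      # power off the HostedEngine VM
# edit the <model> inside <cpu> in /root/HE.xml to a model the hosts support (e.g. Westmere),
# then start the VM from the edited copy (a non-permanent change, as noted above):
virsh create /root/HE.xml
hosted-engine --vm-status                        # wait until the engine reports healthy again
hosted-engine --set-maintenance --mode=none      # then leave global maintenance

If the VM starts and can then be migrated with the adjusted model, that confirms the CPU filter was what blocked the hosted engine migration.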
name='product'>oVirt Node</entry> <entry name='version'>7-5.1804.el7.centos</entry> <entry name='serial'>00000000-0000-0000-0000-0CC47A6B3160</entry> <entry name='uuid'>b4e805ff-556d-42bd-a6df-02f5902fd01c</entry> </system> </sysinfo> <os> <type arch='x86_64' machine='pc-i440fx-rhel7.6.0'>hvm</type> <boot dev='hd'/> <bios useserial='yes'/> <smbios mode='sysinfo'/> </os> <features> <acpi/> </features> <cpu mode='custom' match='exact' check='full'> <model fallback='forbid'>Haswell-noTSX</model> <topology sockets='16' cores='4' threads='1'/> <feature policy='require' name='vme'/> <feature policy='require' name='f16c'/> <feature policy='require' name='rdrand'/> <feature policy='require' name='hypervisor'/> <feature policy='require' name='arat'/> <feature policy='require' name='xsaveopt'/> <feature policy='require' name='abm'/> <numa> <cell id='0' cpus='0-3' memory='16777216' unit='KiB'/> </numa> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>destroy</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver name='qemu' error_policy='report'/> <source startupPolicy='optional'/> <target dev='hdc' bus='ide'/> <readonly/> <alias name='ua-80fde7d5-ee7f-4201-9118-11bc6c3b8530'/> <address type='drive' controller='0' bus='1' target='0' unit='0'/> </disk> <disk type='block' device='disk' snapshot='no'> <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/> <source dev='/var/run/vdsm/storage/c17c1934-332f-464c-8f89-ad72463c00b3/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33'> <seclabel model='dac' relabel='no'/> </source> <backingStore/> <target dev='vda' bus='virtio'/> <serial>8eca143a-4535-4421-bd35-9f5764d67d70</serial> <alias name='ua-8eca143a-4535-4421-bd35-9f5764d67d70'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk> <controller type='scsi' index='0' model='virtio-scsi'> <driver iothread='1'/> <alias name='ua-27331e83-03f4-42a3-9554-c41649c02ba4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <controller type='ide' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0' ports='16'> <alias name='ua-8fe74299-b60f-4778-8e80-db05393a9489'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </controller> <controller type='usb' index='0' model='piix3-uhci'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'> <alias name='pci.0'/> </controller> <lease> <lockspace>c17c1934-332f-464c-8f89-ad72463c00b3</lockspace> <key>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33</key> <target path='/dev/c17c1934-332f-464c-8f89-ad72463c00b3/leases' offset='108003328'/> </lease> <interface type='bridge'> <mac address='00:16:3e:50:c1:97'/> <source bridge='ovirtmgmt'/> <target dev='vnet0'/> <model type='virtio'/> <driver name='vhost' queues='4'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <mtu size='1500'/> <alias name='ua-fada74ee-2338-4cde-a7ba-43a9a636ad6e'/> <address type='pci' domain='0x0000' bus='0x00' 
slot='0x03' function='0x0'/> </interface> <interface type='bridge'> <mac address='56:6f:9b:b0:00:01'/> <source bridge='external'/> <target dev='vnet1'/> <model type='virtio'/> <driver name='vhost' queues='4'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <mtu size='1500'/> <alias name='ua-f7b4c949-1f9f-4355-811d-88428c88ce4e'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> </interface> <serial type='unix'> <source mode='bind' path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='unix'> <source mode='bind' path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.ovirt-guest-agent.0'/> <target type='virtio' name='ovirt-guest-agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0' state='disconnected'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.ovirt.hosted-engine-setup.0'/> <target type='virtio' name='org.ovirt.hosted-engine-setup.0' state='disconnected'/> <alias name='channel3'/> <address type='virtio-serial' controller='0' bus='0' port='4'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <graphics type='vnc' port='5900' autoport='yes' listen='192.168.1.22' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.1.22' network='vdsm-external'/> </graphics> <graphics type='spice' port='5901' tlsPort='5902' autoport='yes' listen='192.168.1.22' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.1.22' network='vdsm-external'/> <channel name='main' mode='secure'/> <channel name='display' mode='secure'/> <channel name='inputs' mode='secure'/> <channel name='cursor' mode='secure'/> <channel name='playback' mode='secure'/> <channel name='record' mode='secure'/> <channel name='smartcard' mode='secure'/> <channel name='usbredir' mode='secure'/> </graphics> <sound model='ich6'> <alias name='ua-bd287767-9b83-4e44-ac6f-8b527f9632b8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1' primary='yes'/> <alias name='ua-bcfb6b4b-0b3c-4d5b-ba2d-8ce40a65facd'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <stats period='5'/> <alias name='ua-39d36063-8808-47db-9fef-a0baad9f9661'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> 
</memballoon> <rng model='virtio'> <backend model='random'>/dev/urandom</backend> <alias name='ua-75516d34-dd8f-4f0f-8496-e1f222a359a8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </rng> </devices> <seclabel type='dynamic' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c162,c716</label> <imagelabel>system_u:object_r:svirt_image_t:s0:c162,c716</imagelabel> </seclabel> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+107:+107</label> <imagelabel>+107:+107</imagelabel> </seclabel> </domain> hosts: <capabilities> <host> <uuid>b25adcec-eef2-49a5-8663-7cdcfc50891b</uuid> <cpu> <arch>x86_64</arch> <model>Westmere</model> <vendor>Intel</vendor> <microcode version='34'/> <counter name='tsc' frequency='3699996000' scaling='no'/> <topology sockets='1' cores='2' threads='2'/> <feature name='vme'/> <feature name='ds'/> <feature name='acpi'/> <feature name='ss'/> <feature name='ht'/> <feature name='tm'/> <feature name='pbe'/> <feature name='pclmuldq'/> <feature name='dtes64'/> <feature name='monitor'/> <feature name='ds_cpl'/> <feature name='vmx'/> <feature name='est'/> <feature name='tm2'/> <feature name='fma'/> <feature name='xtpr'/> <feature name='pdcm'/> <feature name='pcid'/> <feature name='movbe'/> <feature name='tsc-deadline'/> <feature name='xsave'/> <feature name='osxsave'/> <feature name='avx'/> <feature name='f16c'/> <feature name='rdrand'/> <feature name='arat'/> <feature name='fsgsbase'/> <feature name='tsc_adjust'/> <feature name='bmi1'/> <feature name='avx2'/> <feature name='smep'/> <feature name='bmi2'/> <feature name='erms'/> <feature name='invpcid'/> <feature name='xsaveopt'/> <feature name='pdpe1gb'/> <feature name='rdtscp'/> <feature name='abm'/> <feature name='invtsc'/> <pages unit='KiB' size='4'/> <pages unit='KiB' size='2048'/> <pages unit='KiB' size='1048576'/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> <suspend_hybrid/> </power_management> <iommu support='no'/> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> <uri_transport>rdma</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <memory unit='KiB'>32903488</memory> <pages unit='KiB' size='4'>8225872</pages> <pages unit='KiB' size='2048'>0</pages> <pages unit='KiB' size='1048576'>0</pages> <distances> <sibling id='0' value='10'/> </distances> <cpus num='4'> <cpu id='0' socket_id='0' core_id='0' siblings='0,2'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,3'/> <cpu id='2' socket_id='0' core_id='0' siblings='0,2'/> <cpu id='3' socket_id='0' core_id='1' siblings='1,3'/> </cpus> </cell> </cells> </topology> <cache> <bank id='0' level='3' type='both' size='3' unit='MiB' cpus='0-3'/> </cache> <secmodel> <model>selinux</model> <doi>0</doi> <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel> <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel> </secmodel> <secmodel> <model>dac</model> <doi>0</doi> <baselabel type='kvm'>+107:+107</baselabel> <baselabel type='qemu'>+107:+107</baselabel> </secmodel> </host> <guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine> <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine> <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine> <machine maxCpus='384'>pc-q35-rhel7.6.0</machine> <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine> <machine maxCpus='240'>rhel6.3.0</machine> 
<machine maxCpus='240'>rhel6.4.0</machine> <machine maxCpus='240'>rhel6.0.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine> <machine maxCpus='255'>pc-q35-rhel7.3.0</machine> <machine maxCpus='240'>rhel6.5.0</machine> <machine maxCpus='384'>pc-q35-rhel7.4.0</machine> <machine maxCpus='240'>rhel6.6.0</machine> <machine maxCpus='240'>rhel6.1.0</machine> <machine maxCpus='240'>rhel6.2.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine> <machine maxCpus='384'>pc-q35-rhel7.5.0</machine> <domain type='qemu'/> <domain type='kvm'> <emulator>/usr/libexec/qemu-kvm</emulator> </domain> </arch> <features> <cpuselection/> <deviceboot/> <disksnapshot default='on' toggle='no'/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest> <guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine> <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine> <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine> <machine maxCpus='384'>pc-q35-rhel7.6.0</machine> <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine> <machine maxCpus='240'>rhel6.3.0</machine> <machine maxCpus='240'>rhel6.4.0</machine> <machine maxCpus='240'>rhel6.0.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine> <machine maxCpus='255'>pc-q35-rhel7.3.0</machine> <machine maxCpus='240'>rhel6.5.0</machine> <machine maxCpus='384'>pc-q35-rhel7.4.0</machine> <machine maxCpus='240'>rhel6.6.0</machine> <machine maxCpus='240'>rhel6.1.0</machine> <machine maxCpus='240'>rhel6.2.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine> <machine maxCpus='384'>pc-q35-rhel7.5.0</machine> <domain type='qemu'/> <domain type='kvm'> <emulator>/usr/libexec/qemu-kvm</emulator> </domain> </arch> <features> <cpuselection/> <deviceboot/> <disksnapshot default='on' toggle='no'/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest> </capabilities>
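For reference, a quick way to put the two models side by side is the virsh alias already mentioned in this thread (a minimal sketch; the authfile path is the stock one from ovirt-hosted-engine-setup):

alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
# CPU model the HostedEngine VM is currently defined with
virsh dumpxml HostedEngine | grep fallback
# CPU model libvirt advertises for the host itself
virsh capabilities | grep '<model>' | head -n1

A VM model newer than the host model (Haswell-noTSX vs. Westmere here) would be consistent with the "filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU'" messages quoted earlier in this thread.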

So strange! After I set global maintenance, powered off and started the HE, the CPU of the HE became 'Westmere' (I did not change anything). But the HE still could not be migrated.

HE xml:
<cpu mode='custom' match='exact' check='full'>
  <model fallback='forbid'>Westmere</model>
  <topology sockets='16' cores='4' threads='1'/>
  <feature policy='require' name='vme'/>
  <feature policy='require' name='pclmuldq'/>
  <feature policy='require' name='x2apic'/>
  <feature policy='require' name='hypervisor'/>
  <feature policy='require' name='arat'/>
  <numa>
    <cell id='0' cpus='0-3' memory='16777216' unit='KiB'/>
  </numa>
</cpu>

host capabilities:
<model>Westmere</model>

cluster cpu type (UI):
host cpu type (UI):
HE cpu type (UI):
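The power-cycle described above maps onto the standard hosted-engine commands, roughly like this (a sketch only; run on the node that currently hosts the engine VM):

hosted-engine --set-maintenance --mode=global
hosted-engine --vm-poweroff     # or --vm-shutdown for a clean guest shutdown
hosted-engine --vm-start
hosted-engine --vm-status       # wait until the engine reports "up" again
hosted-engine --set-maintenance --mode=none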

That's quite strange. Any errors/clues in the Engine's logs? Best Regards, Strahil Nikolov
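A rough sketch of where to look, using the stock log locations (the grep pattern is simply the scheduler message already quoted in this thread):

# on the HostedEngine VM
grep -i "FILTERTYPE__INTERNAL" /var/log/ovirt-engine/engine.log | tail -n 20
# on each host, around the time of the migration attempt
tail -n 200 /var/log/vdsm/vdsm.log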

Can you put 1 host in maintenance and use "Installation" -> "Reinstall", enabling the HE deployment from one of the tabs? Best Regards, Strahil Nikolov
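If you go that route, it is worth confirming afterwards that the reinstalled node is back as a hosted-engine host before retrying the migration (a small sketch, same commands as earlier in the thread):

hosted-engine --vm-status | grep -i score              # both hosts should report 3400 again
hosted-engine --vm-status | grep -i "engine status"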
policy='require' name='abm'/> <numa> <cell id='0' cpus='0-3' memory='16777216' unit='KiB'/> </numa> </cpu> <clock offset='variable' adjustment='0' basis='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>destroy</on_reboot> <on_crash>destroy</on_crash> <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> <devices> <emulator>/usr/libexec/qemu-kvm</emulator> <disk type='file' device='cdrom'> <driver name='qemu' error_policy='report'/> <source startupPolicy='optional'/> <target dev='hdc' bus='ide'/> <readonly/> <alias name='ua-80fde7d5-ee7f-4201-9118-11bc6c3b8530'/> <address type='drive' controller='0' bus='1' target='0' unit='0'/> </disk> <disk type='block' device='disk' snapshot='no'> <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native' iothread='1'/> <source dev='/var/run/vdsm/storage/c17c1934-332f-464c-8f89-ad72463c00b3/8eca143a-4535-4421-bd35-9f5764d67d70/ae961104-c3b3-4a43-9f46-7fa6bdc2ac33'> <seclabel model='dac' relabel='no'/> </source> <backingStore/> <target dev='vda' bus='virtio'/> <serial>8eca143a-4535-4421-bd35-9f5764d67d70</serial> <alias name='ua-8eca143a-4535-4421-bd35-9f5764d67d70'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </disk> <controller type='scsi' index='0' model='virtio-scsi'> <driver iothread='1'/> <alias name='ua-27331e83-03f4-42a3-9554-c41649c02ba4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </controller> <controller type='ide' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0' ports='16'> <alias name='ua-8fe74299-b60f-4778-8e80-db05393a9489'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </controller> <controller type='usb' index='0' model='piix3-uhci'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'> <alias name='pci.0'/> </controller> <lease> <lockspace>c17c1934-332f-464c-8f89-ad72463c00b3</lockspace> <key>ae961104-c3b3-4a43-9f46-7fa6bdc2ac33</key> <target path='/dev/c17c1934-332f-464c-8f89-ad72463c00b3/leases' offset='108003328'/> </lease> <interface type='bridge'> <mac address='00:16:3e:50:c1:97'/> <source bridge='ovirtmgmt'/> <target dev='vnet0'/> <model type='virtio'/> <driver name='vhost' queues='4'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <mtu size='1500'/> <alias name='ua-fada74ee-2338-4cde-a7ba-43a9a636ad6e'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> <interface type='bridge'> <mac address='56:6f:9b:b0:00:01'/> <source bridge='external'/> <target dev='vnet1'/> <model type='virtio'/> <driver name='vhost' queues='4'/> <filterref filter='vdsm-no-mac-spoofing'/> <link state='up'/> <mtu size='1500'/> <alias name='ua-f7b4c949-1f9f-4355-811d-88428c88ce4e'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/> </interface> <serial type='unix'> <source mode='bind' path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='unix'> <source mode='bind' path='/var/run/ovirt-vmconsole-console/b4e805ff-556d-42bd-a6df-02f5902fd01c.sock'/> <target 
type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.ovirt-guest-agent.0'/> <target type='virtio' name='ovirt-guest-agent.0' state='connected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/> <alias name='channel1'/> <address type='virtio-serial' controller='0' bus='0' port='2'/> </channel> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0' state='disconnected'/> <alias name='channel2'/> <address type='virtio-serial' controller='0' bus='0' port='3'/> </channel> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channels/b4e805ff-556d-42bd-a6df-02f5902fd01c.org.ovirt.hosted-engine-setup.0'/> <target type='virtio' name='org.ovirt.hosted-engine-setup.0' state='disconnected'/> <alias name='channel3'/> <address type='virtio-serial' controller='0' bus='0' port='4'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <graphics type='vnc' port='5900' autoport='yes' listen='192.168.1.22' keymap='en-us' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.1.22' network='vdsm-external'/> </graphics> <graphics type='spice' port='5901' tlsPort='5902' autoport='yes' listen='192.168.1.22' passwdValidTo='1970-01-01T00:00:01'> <listen type='network' address='192.168.1.22' network='vdsm-external'/> <channel name='main' mode='secure'/> <channel name='display' mode='secure'/> <channel name='inputs' mode='secure'/> <channel name='cursor' mode='secure'/> <channel name='playback' mode='secure'/> <channel name='record' mode='secure'/> <channel name='smartcard' mode='secure'/> <channel name='usbredir' mode='secure'/> </graphics> <sound model='ich6'> <alias name='ua-bd287767-9b83-4e44-ac6f-8b527f9632b8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1' primary='yes'/> <alias name='ua-bcfb6b4b-0b3c-4d5b-ba2d-8ce40a65facd'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> <memballoon model='virtio'> <stats period='5'/> <alias name='ua-39d36063-8808-47db-9fef-a0baad9f9661'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </memballoon> <rng model='virtio'> <backend model='random'>/dev/urandom</backend> <alias name='ua-75516d34-dd8f-4f0f-8496-e1f222a359a8'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/> </rng> </devices> <seclabel type='dynamic' model='selinux' relabel='yes'> <label>system_u:system_r:svirt_t:s0:c162,c716</label> <imagelabel>system_u:object_r:svirt_image_t:s0:c162,c716</imagelabel> </seclabel> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+107:+107</label> <imagelabel>+107:+107</imagelabel> </seclabel> </domain>
hosts:
<capabilities> <host> <uuid>b25adcec-eef2-49a5-8663-7cdcfc50891b</uuid> <cpu> <arch>x86_64</arch> <model>Westmere</model> <vendor>Intel</vendor> <microcode version='34'/> <counter name='tsc' frequency='3699996000' scaling='no'/> <topology sockets='1' cores='2' threads='2'/> <feature name='vme'/> <feature name='ds'/> <feature name='acpi'/> <feature name='ss'/> <feature name='ht'/> <feature name='tm'/> <feature name='pbe'/> <feature name='pclmuldq'/> <feature name='dtes64'/> <feature name='monitor'/> <feature name='ds_cpl'/> <feature name='vmx'/> <feature name='est'/> <feature name='tm2'/> <feature name='fma'/> <feature name='xtpr'/> <feature name='pdcm'/> <feature name='pcid'/> <feature name='movbe'/> <feature name='tsc-deadline'/> <feature name='xsave'/> <feature name='osxsave'/> <feature name='avx'/> <feature name='f16c'/> <feature name='rdrand'/> <feature name='arat'/> <feature name='fsgsbase'/> <feature name='tsc_adjust'/> <feature name='bmi1'/> <feature name='avx2'/> <feature name='smep'/> <feature name='bmi2'/> <feature name='erms'/> <feature name='invpcid'/> <feature name='xsaveopt'/> <feature name='pdpe1gb'/> <feature name='rdtscp'/> <feature name='abm'/> <feature name='invtsc'/> <pages unit='KiB' size='4'/> <pages unit='KiB' size='2048'/> <pages unit='KiB' size='1048576'/> </cpu> <power_management> <suspend_mem/> <suspend_disk/> <suspend_hybrid/> </power_management> <iommu support='no'/> <migration_features> <live/> <uri_transports> <uri_transport>tcp</uri_transport> <uri_transport>rdma</uri_transport> </uri_transports> </migration_features> <topology> <cells num='1'> <cell id='0'> <memory unit='KiB'>32903488</memory> <pages unit='KiB' size='4'>8225872</pages> <pages unit='KiB' size='2048'>0</pages> <pages unit='KiB' size='1048576'>0</pages> <distances> <sibling id='0' value='10'/> </distances> <cpus num='4'> <cpu id='0' socket_id='0' core_id='0' siblings='0,2'/> <cpu id='1' socket_id='0' core_id='1' siblings='1,3'/> <cpu id='2' socket_id='0' core_id='0' siblings='0,2'/> <cpu id='3' socket_id='0' core_id='1' siblings='1,3'/> </cpus> </cell> </cells> </topology> <cache> <bank id='0' level='3' type='both' size='3' unit='MiB' cpus='0-3'/> </cache> <secmodel> <model>selinux</model> <doi>0</doi> <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel> <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel> </secmodel> <secmodel> <model>dac</model> <doi>0</doi> <baselabel type='kvm'>+107:+107</baselabel> <baselabel type='qemu'>+107:+107</baselabel> </secmodel> </host> <guest> <os_type>hvm</os_type> <arch name='i686'> <wordsize>32</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine> <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine> <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine> <machine maxCpus='384'>pc-q35-rhel7.6.0</machine> <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine> <machine maxCpus='240'>rhel6.3.0</machine> <machine maxCpus='240'>rhel6.4.0</machine> <machine maxCpus='240'>rhel6.0.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine> <machine maxCpus='255'>pc-q35-rhel7.3.0</machine> <machine maxCpus='240'>rhel6.5.0</machine> <machine maxCpus='384'>pc-q35-rhel7.4.0</machine> <machine maxCpus='240'>rhel6.6.0</machine> <machine maxCpus='240'>rhel6.1.0</machine> <machine maxCpus='240'>rhel6.2.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine> 
<machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine> <machine maxCpus='384'>pc-q35-rhel7.5.0</machine> <domain type='qemu'/> <domain type='kvm'> <emulator>/usr/libexec/qemu-kvm</emulator> </domain> </arch> <features> <cpuselection/> <deviceboot/> <disksnapshot default='on' toggle='no'/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest> <guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine> <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine> <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine> <machine maxCpus='384'>pc-q35-rhel7.6.0</machine> <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine> <machine maxCpus='240'>rhel6.3.0</machine> <machine maxCpus='240'>rhel6.4.0</machine> <machine maxCpus='240'>rhel6.0.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine> <machine maxCpus='255'>pc-q35-rhel7.3.0</machine> <machine maxCpus='240'>rhel6.5.0</machine> <machine maxCpus='384'>pc-q35-rhel7.4.0</machine> <machine maxCpus='240'>rhel6.6.0</machine> <machine maxCpus='240'>rhel6.1.0</machine> <machine maxCpus='240'>rhel6.2.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine> <machine maxCpus='384'>pc-q35-rhel7.5.0</machine> <domain type='qemu'/> <domain type='kvm'> <emulator>/usr/libexec/qemu-kvm</emulator> </domain> </arch> <features> <cpuselection/> <deviceboot/> <disksnapshot default='on' toggle='no'/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest> </capabilities>
在 2020-09-17 12:00:19,"Strahil Nikolov" <hunter86_bg@yahoo.com> 写道:
It would be easier if you posted the whole xml.
What about the sections (in HE xml) starting with: feature policy=
Also the hosts have a section which contains:
<feature name=
If you can, also share the same xml sections for a good VM (one that migrates).
Best Regards, Strahil Nikolov
В четвъртък, 17 септември 2020 г., 05:54:12 Гринуич+3, ddqlo <ddqlo@126.com> написа:
HostedEngine: ...... <model fallback='forbid'>Haswell-noTSX</model> ......
both of the hosts: ...... <model>Westmere</model> ......
others vms which can be migrated: ...... <model fallback='forbid'>Haswell-noTSX</model> ......
在 2020-09-17 03:03:24,"Strahil Nikolov" <hunter86_bg@yahoo.com> 写道:
Can you verify the HostedEngine's CPU ?
1. ssh to the host hosting the HE
2. alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
3. virsh dumpxml HostedEngine

Then set the alias for virsh on all hosts; 'virsh capabilities' should then show each host's <cpu><model>.
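Put together, the comparison looks roughly like this (the grep patterns are only a convenience and assume the XML layout shown elsewhere in this thread):

alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
virsh dumpxml HostedEngine | grep '<model'     # on the host running the HE
virsh capabilities | grep '<model>'            # on every host in the cluster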
Best Regards, Strahil Nikolov
В сряда, 16 септември 2020 г., 10:16:08 Гринуич+3, ddqlo <ddqlo@126.com> написа:
My gateway was not pingable. I have fixed this problem and now both nodes have a score of 3400. Yet, the hosted engine still could not be migrated. The same message appears in engine.log: host filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU'
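(For anyone reproducing this: the scheduler messages are written to the engine log on the engine VM, assuming the default path, and can be pulled out with something like the line below.)

grep "VAR__FILTERTYPE__INTERNAL" /var/log/ovirt-engine/engine.log | tail -n 20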
在 2020-09-16 02:11:09,"Strahil Nikolov" <hunter86_bg@yahoo.com> 写道:
Both nodes have a lower than usual score (it should be 3400). Based on the score, you are probably suffering from the gateway-score-penalty [1][2]. Check whether your gateway is pingable.
Best Regards, Strahil Nikolov
1 - https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf (page 8)
2 - /etc/ovirt-hosted-engine-ha/agent.conf
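A quick way to run that check on each host, assuming the gateway address is still kept under the gateway= key of the hosted-engine configuration:

grep -i '^gateway' /etc/ovirt-hosted-engine/hosted-engine.conf
ping -c 3 "$(awk -F= '/^gateway/{print $2}' /etc/ovirt-hosted-engine/hosted-engine.conf)"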
В вторник, 15 септември 2020 г., 04:49:48 Гринуич+3, ddqlo <ddqlo@126.com> написа:
--== Host node28 (id: 1) status ==--
conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : node28
Host ID                : 1
Engine status          : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score                  : 1800
stopped                : False
Local maintenance      : False
crc32                  : 4ac6105b
local_conf_timestamp   : 1794597
Host timestamp         : 1794597
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1794597 (Tue Sep 15 09:47:17 2020)
    host-id=1
    score=1800
    vm_conf_refresh_time=1794597 (Tue Sep 15 09:47:17 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineDown
    stopped=False
--== Host node22 (id: 2) status ==--
conf_on_shared_storage : True
Status up-to-date      : True
Hostname               : node22
Host ID                : 2
Engine status          : {"health": "good", "vm": "up", "detail": "Up"}
Score                  : 1800
stopped                : False
Local maintenance      : False
crc32                  : ffc41893
local_conf_timestamp   : 1877876
Host timestamp         : 1877876
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1877876 (Tue Sep 15 09:47:13 2020)
    host-id=2
    score=1800
    vm_conf_refresh_time=1877876 (Tue Sep 15 09:47:13 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUp
    stopped=False
在 2020-09-09 01:32:55,"Strahil Nikolov" <hunter86_bg@yahoo.com> 写道:
What is the output of 'hosted-engine --vm-status' on the node where the HostedEngine is running ?
Best Regards, Strahil Nikolov
В понеделник, 7 септември 2020 г., 03:53:13 Гринуич+3, ddqlo <ddqlo@126.com> написа:
I could not find any logs because the migration button is disabled in the web UI. It seems the migration of the engine VM is blocked before it even starts. Any other ideas? Thanks!

Yes, I can. The host which does not host the HE could be reinstalled successfully in the web UI. After this was done, nothing changed.

在 2020-09-22 03:08:18,"Strahil Nikolov" <hunter86_bg@yahoo.com> 写道:
Can you put 1 host in maintenance and use the "Installation" -> "Reinstall" and enable the HE deployment from one of the tabs ?
Best Regards, Strahil Nikolov
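After such a reinstall it may be worth confirming, from a shell on either host, that the reinstalled node is still registered as an HE host and reports a normal score, for example:

hosted-engine --vm-status | grep -E 'Hostname|Score|Engine status'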

So, let's summarize:
- Cannot migrate the HE due to the "CPU policy".
- The HE's CPU is Westmere, just like the hosts'.
- You have enough resources on the second HE host (both CPU and memory).

What is the cluster's CPU type (you can check in the UI)?
Maybe you should enable debugging in various places to identify the issue.
Anything interesting in libvirt's log for the HostedEngine VM on the destination host?

Best Regards, Strahil Nikolov

В вторник, 22 септември 2020 г., 05:37:18 Гринуич+3, ddqlo <ddqlo@126.com> написа:

Yes, I can. The host which does not host the HE could be reinstalled successfully in the web UI. After this was done, nothing changed.
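For the log checks suggested in the summary above, the usual places to look on the destination host would be something like the following (assuming the default log locations):

tail -n 100 /var/log/libvirt/qemu/HostedEngine.log                     # libvirt/qemu log of the HE VM
grep -iE 'migrat|HostedEngine' /var/log/vdsm/vdsm.log | tail -n 50     # vdsm side of the attempt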
<machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine> <machine maxCpus='384'>pc-q35-rhel7.5.0</machine> <domain type='qemu'/> <domain type='kvm'> <emulator>/usr/libexec/qemu-kvm</emulator> </domain> </arch> <features> <cpuselection/> <deviceboot/> <disksnapshot default='on' toggle='no'/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> <pae/> <nonpae/> </features> </guest> <guest> <os_type>hvm</os_type> <arch name='x86_64'> <wordsize>64</wordsize> <emulator>/usr/libexec/qemu-kvm</emulator> <machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine> <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine> <machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine> <machine maxCpus='384'>pc-q35-rhel7.6.0</machine> <machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine> <machine maxCpus='240'>rhel6.3.0</machine> <machine maxCpus='240'>rhel6.4.0</machine> <machine maxCpus='240'>rhel6.0.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine> <machine maxCpus='255'>pc-q35-rhel7.3.0</machine> <machine maxCpus='240'>rhel6.5.0</machine> <machine maxCpus='384'>pc-q35-rhel7.4.0</machine> <machine maxCpus='240'>rhel6.6.0</machine> <machine maxCpus='240'>rhel6.1.0</machine> <machine maxCpus='240'>rhel6.2.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine> <machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine> <machine maxCpus='384'>pc-q35-rhel7.5.0</machine> <domain type='qemu'/> <domain type='kvm'> <emulator>/usr/libexec/qemu-kvm</emulator> </domain> </arch> <features> <cpuselection/> <deviceboot/> <disksnapshot default='on' toggle='no'/> <acpi default='on' toggle='yes'/> <apic default='on' toggle='no'/> </features> </guest> </capabilities>
On 2020-09-17 12:00:19, "Strahil Nikolov" <hunter86_bg@yahoo.com> wrote:
It would be easier if you posted the whole XML.
What about the sections (in the HE XML) starting with '<feature policy='?
Also, the hosts have a section which contains:
<feature name=
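As a minimal sketch (assuming the virsh alias from the reply further down is already in place; the grep patterns are just one way to pull out the relevant lines), the two sides can be compared like this:

  # requested CPU model and per-feature policy of the HostedEngine VM
  virsh dumpxml HostedEngine | grep -E "<model |<feature policy="

  # CPU model and feature names the host itself exposes
  virsh capabilities | grep -E "<model>|<feature name="

A feature the VM requires (policy='require') that is missing from a host's feature list would typically prevent that host from being a migration target.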
Could you also share the same XML sections for a VM that migrates fine?
Best Regards, Strahil Nikolov
On Thursday, 17 September 2020, 05:54:12 GMT+3, ddqlo <ddqlo@126.com> wrote:
HostedEngine: ...... <model fallback='forbid'>Haswell-noTSX</model> ......
both of the hosts: ...... <model>Westmere</model> ......
other VMs which can be migrated: ...... <model fallback='forbid'>Haswell-noTSX</model> ......
On 2020-09-17 03:03:24, "Strahil Nikolov" <hunter86_bg@yahoo.com> wrote:
Can you verify the HostedEngine's CPU?
1. ssh to the host hosting the HE
2. alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
3. virsh dumpxml HostedEngine
Then set the alias for virsh on all hosts; 'virsh capabilities' should show each host's <cpu><model>.
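For example, a quick sweep over both hosts could look like this (a sketch only: it assumes passwordless root ssh between the hosts and uses the node names node28/node22 from the vm-status output quoted further down):

  for h in node28 node22; do
    echo "== $h =="
    ssh root@"$h" "virsh -c 'qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf' capabilities | grep -m1 '<model>'"
  done

The first <model> element in the capabilities output is the host CPU model.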
Best Regards, Strahil Nikolov
On Wednesday, 16 September 2020, 10:16:08 GMT+3, ddqlo <ddqlo@126.com> wrote:
My gateway was not pingable. I have fixed that problem and now both nodes have a score of 3400. Yet the hosted engine still could not be migrated. The same message appears in engine.log: host filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'CPU'
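For reference, those scheduler decisions can be pulled straight from the engine log on the HostedEngine VM (standard oVirt log path; the pattern simply matches the line quoted above):

  grep "FILTERTYPE__INTERNAL" /var/log/ovirt-engine/engine.log | tail -n 20

Each such line names the filter that rejected a candidate host, so a 'CPU' entry generally points at a CPU model/flag mismatch rather than at HA scoring.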
On 2020-09-16 02:11:09, "Strahil Nikolov" <hunter86_bg@yahoo.com> wrote:
Both nodes have a lower-than-usual score (it should be 3400). Based on the score, you are probably suffering from the gateway-score-penalty [1][2]. Check whether your gateway is pingable (a quick check is sketched after the references below).
Best Regards, Strahil Nikolov
1 - https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf (page 8)
2 - /etc/ovirt-hosted-engine-ha/agent.conf
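If the gateway penalty comes back, a quick check along these lines may help (a sketch; it assumes the monitored gateway is the 'gateway' entry that hosted-engine setup records in /etc/ovirt-hosted-engine/hosted-engine.conf):

  GW=$(awk -F= '/^gateway=/{print $2}' /etc/ovirt-hosted-engine/hosted-engine.conf)
  ping -c 3 "$GW"

A gateway that stops answering pings costs each host part of its HA score, which is consistent with the 1800 seen in the status output below.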
On Tuesday, 15 September 2020, 04:49:48 GMT+3, ddqlo <ddqlo@126.com> wrote:
--== Host node28 (id: 1) status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : node28
Host ID                            : 1
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score                              : 1800
stopped                            : False
Local maintenance                  : False
crc32                              : 4ac6105b
local_conf_timestamp               : 1794597
Host timestamp                     : 1794597
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1794597 (Tue Sep 15 09:47:17 2020)
    host-id=1
    score=1800
    vm_conf_refresh_time=1794597 (Tue Sep 15 09:47:17 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineDown
    stopped=False
--== Host node22 (id: 2) status ==--
conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : node22
Host ID                            : 2
Engine status                      : {"health": "good", "vm": "up", "detail": "Up"}
Score                              : 1800
stopped                            : False
Local maintenance                  : False
crc32                              : ffc41893
local_conf_timestamp               : 1877876
Host timestamp                     : 1877876
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1877876 (Tue Sep 15 09:47:13 2020)
    host-id=2
    score=1800
    vm_conf_refresh_time=1877876 (Tue Sep 15 09:47:13 2020)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUp
    stopped=False
On 2020-09-09 01:32:55, "Strahil Nikolov" <hunter86_bg@yahoo.com> wrote:
What is the output of 'hosted-engine --vm-status' on the node where the HostedEngine is running?
Best Regards, Strahil Nikolov
On Monday, 7 September 2020, 03:53:13 GMT+3, ddqlo <ddqlo@126.com> wrote:
I could not find any relevant logs because the Migrate button is disabled in the web UI, so the engine migration is blocked before it even starts. Any other ideas? Thanks!
On 2020-09-01 00:06:19, "Strahil Nikolov" <hunter86_bg@yahoo.com> wrote:
participants (4)
- ddqlo
- Strahil Nikolov
- Yedidyah Bar David
- 董青龙