oVirt Node install failed

Hi all!

The following error occurs during installation of oVirt Node 4.2.8:

EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during installation of Host hostname_ovirt_node2: Yum Cannot queue package dmidecode: Cannot retrieve metalink for repository: ovirt-4.2-epel/x86_64. Please verify its path and try again

From the oVirt Node shell, typing the command:

yum install dmidecode

gives:

Cannot retrieve metalink for repository: ovirt-4.2-epel/x86_64. Please verify its path and try again
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Cannot upload enabled repos report, is this client registered?

Does anyone know how to fix this?

On Tue, Feb 19, 2019 at 09:23 <kiv@intercom.pro> wrote:
Hi all!
The following error occurs during installation of oVirt Node 4.2.8:
EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during installation of Host hostname_ovirt_node2: Yum Cannot queue package dmidecode: Cannot retrieve metalink for repository: ovirt-4.2-epel/x86_64. Please verify its path and try again
From the oVirt Node shell, typing the command:
yum install dmidecode
Hi, in oVirt Node 4.2.8 dmidecode is already installed:

dmidecode-3.1-2.el7.x86_64
python-dmidecode-3.12.2-3.el7.x86_64

Also, on oVirt Node 4.2 the EPEL repository should be disabled by default. Did you change something manually on it?
Cannot retrieve metalink for repository: ovirt-4.2-epel/x86_64. Please verify its path and try again
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Cannot upload enabled repos report, is this client registered?
Does anyone know how to fix this?
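A quick way to check is to list the enabled repositories on the node and disable EPEL if it shows up (a sketch, assuming the standard yum tooling; yum-config-manager comes with yum-utils):

yum repolist enabled
yum-config-manager --disable ovirt-4.2-epel
yum clean all

If ovirt-4.2-epel still shows up afterwards, look for a manual edit in the .repo files under /etc/yum.repos.d/.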
--
Sandro Bonazzola
Manager, Software Engineering, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com

I downloaded the oVirt Node 4.2.8 ISO and installed it. I tried to add the host to oVirt and received this error. I did not make any changes after the install.

rpm -qa | grep dmidecode
python-dmidecode-3.12.2-3.el7.x86_64
dmidecode-3.1-2.el7.x86_64

On Wed, Feb 20, 2019 at 06:43 <kiv@intercom.pro> wrote:
My current oVirt version is 4.2.6. Maybe I need to update it?
A 4.2.6 engine is supposed to work with a 4.2.8 node, but yes, it is better to upgrade. If you are not using Gluster, I would recommend upgrading to 4.3, which is the currently supported version.
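For reference, the usual engine upgrade path looks roughly like this (a sketch of the standard oVirt flow, not exact instructions; verify the release rpm URL against the 4.3 release notes, and remember that a hosted engine must be put into global maintenance before running engine-setup):

hosted-engine --set-maintenance --mode=global

then, on the engine VM:

yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum update ovirt\*setup\*
engine-setup
yum update

and afterwards, back on a host:

hosted-engine --set-maintenance --mode=none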

Thanks for the answer. Now I have two hosts, 4.2.6 and 4.2.8, and the engine is 4.2.6. VMs migrate between these hosts without problems, but the engine VM refuses to migrate to the 4.2.8 host; it says:

No available Host to migrate to.

Since it cannot migrate, there is no way to put the host in maintenance mode, and so no way to upgrade. How can I find out why? Should I install the host with 4.2.6?
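One way to find out, assuming the default log location on the engine VM, is to search engine.log for the scheduler's filtering decisions:

grep -i 'filtered out' /var/log/ovirt-engine/engine.log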

In the logs:

Candidate host 'ovirt2' ('4086-9cce-365172819c60') was filtered out by 'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
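The 'HA' filter drops candidate hosts that do not have a positive hosted-engine HA score, which usually means the host was never deployed for hosted-engine or its HA services are not running. A quick check on the filtered host (assuming the standard oVirt service names):

systemctl status ovirt-ha-agent ovirt-ha-broker
hosted-engine --vm-status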

If you haven't "installed" or "reinstalled" the second host without purposely selecting "DEPLOY" under hosted-engine actions, it will not be able to run the hosted-engine VM. A quick way to tell if you did is to look at the hosts view and look for the "crowns" on the left like this attached pic example. On Sun, Feb 24, 2019 at 11:27 PM <kiv@intercom.pro> wrote:
On Sun, Feb 24, 2019 at 11:27 PM <kiv@intercom.pro> wrote:

Thanks for the answer.
Now I have two hosts, 4.2.6 and 4.2.8, and the engine is 4.2.6. VMs migrate between these hosts without problems, but the engine VM refuses to migrate to the 4.2.8 host; it says:
No available Host to migrate to.
Since it cannot migrate, there is no way to put the host in maintenance mode, and so no way to upgrade. How can I find out why? Should I install the host with 4.2.6?

The crown icon on the left of the second host is gray. When I try to migrate the engine, I get the error:

Migration of VM 'HostedEngine' to host 'ovirt2' failed: VM destroyed during the startup
EVENT_ID: VM_MIGRATION_NO_VDS_TO_MIGRATE_TO(166), No available host was found to migrate VM HostedEngine to.

# hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt1
Host ID                            : 1
Engine status                      : {"health": "good", "vm": "up", "detail": "Up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 69c4d342
local_conf_timestamp               : 1968825
Host timestamp                     : 1968824
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1968824 (Tue Feb 26 08:42:10 2019)
    host-id=1
    score=3400
    vm_conf_refresh_time=1968825 (Tue Feb 26 08:42:10 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUp
    stopped=False

--== Host 2 status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : ovirt2
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : a1712694
local_conf_timestamp               : 324313
Host timestamp                     : 324313
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=324313 (Tue Feb 26 10:41:54 2019)
    host-id=2
    score=3400
    vm_conf_refresh_time=324313 (Tue Feb 26 10:41:54 2019)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineDown
    stopped=False

and some more logs:

2019-02-26 11:04:28,747+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-81) [] VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719' is migrating to VDS '45b1d017-16ee-4e89-97f9-c0b002427e5d'(ovirt2) ignoring it in the refresh until migration is done
2019-02-26 11:04:31,651+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719' was reported as Down on VDS '45b1d017-16ee-4e89-97f9-c0b002427e5d'(ovirt2)
2019-02-26 11:04:31,652+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-6) [] START, DestroyVDSCommand(HostName = ovirt2, DestroyVmVDSCommandParameters:{hostId='45b1d017-16ee-4e89-97f9-c0b002427e5d', vmId='fa3a78de-b329-4a58-8f06-efd6b0e3c719', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 72ef70f2
2019-02-26 11:04:32,503+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-6) [] Failed to destroy VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719' because VM does not exist, ignoring
2019-02-26 11:04:32,503+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-6) [] FINISH, DestroyVDSCommand, log id: 72ef70f2
2019-02-26 11:04:32,503+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719'(HostedEngine) was unexpectedly detected as 'Down' on VDS '45b1d017-16ee-4e89-97f9-c0b002427e5d'(ovirt2) (expected on '46af80a5-21e8-48ce-b92b-e18120f36093')
2019-02-26 11:04:32,503+05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] Migration of VM 'HostedEngine' to host 'ovirt2' failed: VM destroyed during the startup.
2019-02-26 11:04:32,506+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719'(HostedEngine) moved from 'MigratingFrom' --> 'Up'
2019-02-26 11:04:32,506+05 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-6) [] Adding VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719'(HostedEngine) to re-run list
2019-02-26 11:04:32,516+05 ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (ForkJoinPool-1-worker-6) [] Rerun VM 'fa3a78de-b329-4a58-8f06-efd6b0e3c719'. Called from VDS 'ovirt1'
2019-02-26 11:04:32,520+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-88142) [] START, MigrateStatusVDSCommand(HostName = ovirt1, MigrateStatusVDSCommandParameters:{hostId='46af80a5-21e8-48ce-b92b-e18120f36093', vmId='fa3a78de-b329-4a58-8f06-efd6b0e3c719'}), log id: 5c346b33
2019-02-26 11:04:32,524+05 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-88142) [] FINISH, MigrateStatusVDSCommand, log id: 5c346b33
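The engine log only records that the VM was destroyed during startup on ovirt2; the actual reason is usually in the destination host's logs (assuming the default log locations):

grep 'fa3a78de-b329-4a58-8f06-efd6b0e3c719' /var/log/vdsm/vdsm.log
less /var/log/libvirt/qemu/HostedEngine.log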

OK, if the icon is there, that is a good thing. There would be no icon at all if you didn't select deploy. It's not terribly obvious when first installing a second host that it needs the deploy part set.

There's something else causing the engine migration to fail. You can dig through the logs on the engine and hosts. I would look next at the underlying storage for the ovirt-engine image, to see if it got bit by the bug that leaves it owned by root and not by vdsm:kvm. That prevented me from migrating the engine to properly deployed nodes when upgrading to 4.3.

On one of the hosts, follow the path /rhev/data-center/mnt/* to "ls -l" the images directory for your hosted-storage and check the ownership. If it is owned by root and not vdsm:kvm, then chown it to that (aka 36:36) and try migrating again.
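For example (a sketch; the mount point and UUIDs under /rhev/data-center/mnt/ are placeholders that will differ on your setup, and 36:36 is the vdsm:kvm uid:gid):

ls -l /rhev/data-center/mnt/<hosted_storage_mount>/<storage_domain_uuid>/images/<image_uuid>
chown -R 36:36 /rhev/data-center/mnt/<hosted_storage_mount>/<storage_domain_uuid>/images/<image_uuid>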
On Mon, Feb 25, 2019 at 10:50 PM <kiv@intercom.pro> wrote:

The crown icon on the left of the second host is gray. When I try to migrate the engine, I get the error:
Migration of VM 'HostedEngine' to host 'ovirt2' failed: VM destroyed during the startup
EVENT_ID: VM_MIGRATION_NO_VDS_TO_MIGRATE_TO(166), No available host was found to migrate VM HostedEngine to.