Hosted Engine deployment looks stuck in startup during the deployment
by Eugène Ngontang
Hi,
I'm using an AWS EC2 bare-metal instance to deploy RHV-M in order to
create and test NVIDIA GPU VMs.
I'm trying to deploy a self hosted engine version 4.4.
I've set up everything up to the hosted-engine deployment, and the Hosted
Engine deployment appears to be stuck at the engine host startup step; it
then times out many hours later.
I suspect a networking startup issue, but I can't really and clearly
identify it. During all this time the deployment process is waiting for
the hosted engine to come up before it finishes; the hosted engine VM
itself is up and running (it is still running now), but it is not
reachable.
Attached you will find:
- A screenshot before the timeout
- A screenshot after the timeout (fail)
- The answer file I appended to the hosted-engine command
> hosted-engine --deploy --4 --config-append=hosted-engine.conf
- The deployment log output
- The resulting answer file after the deployment.
I think the problem is at the network startup step, but as I don't have
any explicit error/failure message, I can't tell.
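For anyone hitting the same symptom, a minimal diagnostic sketch for this
phase, where the local engine VM is up but unreachable (the domain name
HostedEngineLocal and the log path are assumptions based on a typical 4.4
deployment, not taken from the attached logs):
# Is the local engine VM running, and which address did it get?
virsh -r list --all
virsh -r domifaddr HostedEngineLocal
# Can the host reach that address? (<local-vm-ip> is a placeholder)
ping -c 3 <local-vm-ip>
# Watch the setup log while the deployment is waiting
tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log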
Please can someone here advise?
Please let me know if you need any more information from me.
Best regards,
Eugène NG
--
LesCDN <http://lescdn.com>
engontang(a)lescdn.com
------------------------------------------------------------
*Men need a leader, and the leader needs men! Clothes don't make the man,
but when people see you, they judge you!*
Multiple dependencies unresolved
by Andrea Chierici
Dear all,
lately the engine started notifying me about some "errors":
"Failed to check for available updates on host XYZ with message 'Task
Ensure Python3 is installed for CentOS/RHEL8 hosts failed to execute.
Please check logs for more details"
I do understand this is not something that can impact my cluster
stability, since it's only a matter of checking for updates, but it
annoys me a lot anyway.
I checked the logs and apparently the issue is related to some repos
that are missing/unresolved.
Right now on my hosts I have these repos:
ovirt-release44-4.4.8.3-1.el8.noarch
epel-release-8-13.el8.noarch
centos-stream-release-8.6-1.el8.noarch
puppet5-release-5.0.0-5.el8.noarch
The problems come from:
Error: Failed to download metadata for repo 'ovirt-4.4-centos-gluster8':
Cannot prepare internal mirrorlist: No URLs in mirrorlist
Error: Failed to download metadata for repo 'ovirt-4.4-centos-opstools':
Cannot prepare internal mirrorlist: No URLs in mirrorlist
Error: Failed to download metadata for repo
'ovirt-4.4-openstack-victoria': Cannot download repomd.xml: Cannot
download repodata/repomd.xml: All mirrors were tried
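As a point of reference, a minimal sketch of disabling just the dead
CentOS 8 repos before re-checking (repo IDs copied from the errors above;
assumes dnf-plugins-core is installed):
# Disable the repos whose mirrorlists no longer resolve (CentOS 8 is EOL)
dnf config-manager --set-disabled ovirt-4.4-centos-gluster8 \
    ovirt-4.4-centos-opstools ovirt-4.4-openstack-victoria
# Refresh metadata to see whether anything else still fails
dnf clean metadata && dnf makecache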
If I disable these repos "yum update" can finish but then I get a large
number of unresolved dependencies and "problems":
Error:
Problem 1: package cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
requires ansible, but none of the providers can be installed
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.25-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.27-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.17-1.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.18-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.20-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.21-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.23-2.el8.noarch
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.24-2.el8.noarch
- package ansible-2.9.27-2.el8.noarch conflicts with ansible-core >
2.11.0 provided by ansible-core-2.12.2-2.el8.x86_64
- package ansible-core-2.12.2-2.el8.x86_64 obsoletes ansible < 2.10.0
provided by ansible-2.9.27-2.el8.noarch
- cannot install the best update candidate for package
cockpit-ovirt-dashboard-0.15.1-1.el8.noarch
- cannot install the best update candidate for package
ansible-2.9.25-1.el8.noarch
- package ansible-2.9.20-1.el8.noarch is filtered out by exclude
filtering
Problem 2: package fence-agents-ibm-powervs-4.2.1-84.el8.noarch
requires fence-agents-common = 4.2.1-84.el8, but none of the providers
can be installed
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-84.el8.noarch
- cannot install the best update candidate for package
fence-agents-ibm-powervs-4.2.1-77.el8.noarch
- cannot install the best update candidate for package
fence-agents-common-4.2.1-77.el8.noarch
Problem 3: cannot install both fence-agents-common-4.2.1-88.el8.noarch
and fence-agents-common-4.2.1-84.el8.noarch
- package fence-agents-ibm-vpc-4.2.1-84.el8.noarch requires
fence-agents-common = 4.2.1-84.el8, but none of the providers can be
installed
- package fence-agents-amt-ws-4.2.1-88.el8.noarch requires
fence-agents-common >= 4.2.1-88.el8, but none of the providers can be
installed
- cannot install the best update candidate for package
fence-agents-ibm-vpc-4.2.1-77.el8.noarch
- cannot install the best update candidate for package
fence-agents-amt-ws-4.2.1-77.el8.noarch
Problem 4: problem with installed package
fence-agents-ibm-vpc-4.2.1-77.el8.noarch
- package fence-agents-ibm-vpc-4.2.1-77.el8.noarch requires
fence-agents-common = 4.2.1-77.el8, but none of the providers can be
installed
- package fence-agents-ibm-vpc-4.2.1-78.el8.noarch requires
fence-agents-common = 4.2.1-78.el8, but none of the providers can be
installed
- package fence-agents-ibm-vpc-4.2.1-82.el8.noarch requires
fence-agents-common = 4.2.1-82.el8, but none of the providers can be
installed
- package fence-agents-ibm-vpc-4.2.1-83.el8.noarch requires
fence-agents-common = 4.2.1-83.el8, but none of the providers can be
installed
- package fence-agents-ibm-vpc-4.2.1-84.el8.noarch requires
fence-agents-common = 4.2.1-84.el8, but none of the providers can be
installed
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-77.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-78.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-82.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-83.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-84.el8.noarch
- package fence-agents-apc-4.2.1-88.el8.noarch requires
fence-agents-common >= 4.2.1-88.el8, but none of the providers can be
installed
- cannot install the best update candidate for package
fence-agents-apc-4.2.1-77.el8.noarch
Problem 5: problem with installed package
fence-agents-ibm-powervs-4.2.1-77.el8.noarch
- package fence-agents-ibm-powervs-4.2.1-77.el8.noarch requires
fence-agents-common = 4.2.1-77.el8, but none of the providers can be
installed
- package fence-agents-ibm-powervs-4.2.1-78.el8.noarch requires
fence-agents-common = 4.2.1-78.el8, but none of the providers can be
installed
- package fence-agents-ibm-powervs-4.2.1-82.el8.noarch requires
fence-agents-common = 4.2.1-82.el8, but none of the providers can be
installed
- package fence-agents-ibm-powervs-4.2.1-83.el8.noarch requires
fence-agents-common = 4.2.1-83.el8, but none of the providers can be
installed
- package fence-agents-ibm-powervs-4.2.1-84.el8.noarch requires
fence-agents-common = 4.2.1-84.el8, but none of the providers can be
installed
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-77.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-78.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-82.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-83.el8.noarch
- cannot install both fence-agents-common-4.2.1-88.el8.noarch and
fence-agents-common-4.2.1-84.el8.noarch
- package fence-agents-apc-snmp-4.2.1-88.el8.noarch requires
fence-agents-common >= 4.2.1-88.el8, but none of the providers can be
installed
- cannot install the best update candidate for package
fence-agents-apc-snmp-4.2.1-77.el8.noarch
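If an update is eventually attempted with those repos disabled, one
possible workaround for the ansible/fence-agents conflicts is to hold the
conflicting packages back; a sketch only, not a verified fix (package
names are the ones from the errors above):
# Update with the dead repos disabled and the conflicting packages excluded
dnf update --disablerepo=ovirt-4.4-centos-gluster8 \
    --disablerepo=ovirt-4.4-centos-opstools \
    --disablerepo=ovirt-4.4-openstack-victoria \
    --exclude=ansible-core --exclude='fence-agents-*'
# Or pin them with the versionlock plugin
dnf install python3-dnf-plugin-versionlock
dnf versionlock add ansible fence-agents-common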
My question is: since I am not willing to update anything right now and
just want to get rid of the stupid python3 error, should I disable these
repos even though I then get some dependency issues, or is there another
solution?
Any suggestion?
Thanks,
Andrea
--
Andrea Chierici - INFN-CNAF
Viale Berti Pichat 6/2, 40127 BOLOGNA
Office Tel: +39 051 2095463
SkypeID ataruz
--
Deleting Snapshot failed
by Jonathan Baecker
Hello everybody,
last night my backup script was not able to finish the backup of a VM at
the last step, deleting the snapshot. And now I also cannot delete this
snapshot by hand; the message says:
VDSM onode2 command MergeVDS failed: Drive image file could not be
found: {'driveSpec': {'poolID':
'c9baa5d4-3543-11eb-9c0c-00163e33f845', 'volumeID':
'024e1844-c19b-40d8-a2ac-cb4ea6ec34e6', 'imageID':
'ad23c0db-1838-4f1f-811b-2b213d3a11cd', 'domainID':
'3cf83851-1cc8-4f97-8960-08a60b9e25db'}, 'job':
'96c7003f-e111-4270-b922-d9b215aaaea2', 'reason': 'Cannot find drive'}
The full log can be found in the attachment.
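For what it's worth, a minimal sketch for checking whether VDSM can still
see the volume the merge is complaining about (IDs copied from the error
above; run on the host as a read-only query; the parameter names follow
the VDSM schema, so treat them as an assumption):
vdsm-client Volume getInfo \
    storagepoolID=c9baa5d4-3543-11eb-9c0c-00163e33f845 \
    storagedomainID=3cf83851-1cc8-4f97-8960-08a60b9e25db \
    imageID=ad23c0db-1838-4f1f-811b-2b213d3a11cd \
    volumeID=024e1844-c19b-40d8-a2ac-cb4ea6ec34e6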
Any idea?
Best regards,
Jonathan
How to create a new Direct LUN with the API
by michael.wagenknecht@continentale.de
Hi,
I tried to create a new Fibre Channel Direct LUN with the API. But it doesn't work. My command is:
curl -s \
--cacert '/etc/pki/ovirt-engine/ca.pem' \
--request POST \
--header 'Version: 4' \
--header 'Accept: application/xml' \
--user 'admin@internal:XXXXXXXXXX' \
--data '
<disk>
<alias>DX600RZ2_OLVM_Test</alias>
<name>DX600RZ2_OLVM_Test</name>
<lun_storage>
<type>fcp</type>
<logical_units>
<logical_unit id="3600000e00d2a0000002a0d15034d0000">
</logical_unit>
</logical_units>
</lun_storage>
</disk>
' \
https://olvmmanager/ovirt-engine/api/disks
I think there are parameters missing, but I can't find a working example.
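For comparison, here is the same request with an explicit Content-Type
header, which the REST API generally expects when the payload is XML.
This is a sketch only; whether anything further is required for an FCP
direct LUN is not confirmed here, and the credentials/host are the same
placeholders as above:
curl -s \
  --cacert '/etc/pki/ovirt-engine/ca.pem' \
  --request POST \
  --header 'Version: 4' \
  --header 'Accept: application/xml' \
  --header 'Content-Type: application/xml' \
  --user 'admin@internal:XXXXXXXXXX' \
  --data '
<disk>
  <alias>DX600RZ2_OLVM_Test</alias>
  <name>DX600RZ2_OLVM_Test</name>
  <lun_storage>
    <type>fcp</type>
    <logical_units>
      <logical_unit id="3600000e00d2a0000002a0d15034d0000"/>
    </logical_units>
  </lun_storage>
</disk>' \
  https://olvmmanager/ovirt-engine/api/disks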
Please help.
Best Regards,
Michael
How I'd like to contribute
by Glen Jarvis
I am researching how to contribute to the oVirt community. I started here:
https://www.ovirt.org/community/
And I immediately saw the instruction to sign up for this mailing list and ... send us an email saying how you would like to contribute. Visit our [mailing lists](https://lists.ovirt.org/archives/) page for other oVirt mailing lists to sign up for.
My answers are: I want to be useful (and give more than I take). I can answer questions on mailing lists, help troubleshoot, write ovirt.ovirt Ansible collections, roles and custom modules. I am a seasoned Python programmer.
My background:
- Python programmer for 10+ years
- I write custom Ansible modules, roles, playbook, etc.
- Previous DBA for Informix (highly certified, but who has heard of Informix anymore?). Postgres and Informix are cousins (both offspring of Ingres)
- I have *some* rudimentary knowledge of virtualization. However, I'm far from an expert
- One of my favorite OSes is Qubes (an OS made of virtual machines, really)
- I do a lot of technical training (writing materials and facilitating classes)
- I work in an SRE / SysAd role at a large company that puts music in people's ears (I'm hoping to shift some of this work from SysAd toward SRE with some of the oVirt stuff we're working on).
My intermediate skills
- I have bought a book on libvirt. But, it's still on my backlog. It feels that I'm always sucking an ocean through a straw so I have to pick and choose what I read next
- My second favorite OS is Ubuntu as my main desktop (Qubes on separate computer for more secure stuff -- like crypto)
- I'm just starting to use Virtual Machine Manager to run other OSs on Ubuntu
My Oh-I-have-No-Idea skills:
- things like LUNs, iSCSI and the `vdsm-tool config-lvm-filter` are making me pull my hair out. This is the reason I was frustrated enough to say "Let me join this community so I can learn more about how this architecture works."
- I work for a large company that has a RedHat support contract. We use RHV. I just wrote up this long descriptive case of the problem, uploaded sosreports, added as much detail as I could. But, it's crickets. If I knew more I could debug what was happening more myself.
How did I do for an introduction?
Cheers,
Glen Jarvis
ovirt-4.4 mirrors failing
by Ayansh Rocks
Hi All,
Most of the mirrors for the ovirt-4.4 dependencies are failing. What can
be done here?
Error: Failed to download metadata for repo 'ovirt-4.4-centos-gluster8':
Cannot prepare internal mirrorlist: No URLs in mirrorlist
Errors during downloading metadata for repository
'ovirt-4.4-openstack-victoria':
- Status code: 404 for
http://mirror.centos.org/centos/8/cloud/x86_64/openstack-victoria/repodat...
(IP: 54.169.224.98)
Error: Failed to download metadata for repo 'ovirt-4.4-openstack-victoria':
Cannot download repomd.xml: Cannot download repodata/repomd.xml: All
mirrors were tried
Errors during downloading metadata for repository
'ovirt-4.4-centos-nfv-openvswitch':
- Status code: 404 for
http://mirror.centos.org/centos/8/nfv/x86_64/openvswitch-2/repodata/repom...
(IP: 13.231.175.254)
Error: Failed to download metadata for repo
'ovirt-4.4-centos-nfv-openvswitch': Cannot download repomd.xml: Cannot
download repodata/repomd.xml: All mirrors were tried
[root@iondelsvr12 yum.repos.d]# dnf install ovirt-hosted-engine-setup -y
Ceph packages for x86_64
0.0 B/s | 0 B 00:00
Errors during downloading metadata for repository
'ovirt-4.4-centos-ceph-pacific':
- Curl error (7): Couldn't connect to server for
http://mirror.centos.org/centos/8/storage/x86_64/ceph-pacific/repodata/re...
[Failed to connect to mirror.centos.org port 80: Connection refused]
Error: Failed to download metadata for repo
'ovirt-4.4-centos-ceph-pacific': Cannot download repomd.xml: Cannot
download repodata/repomd.xml: All mirrors were tried
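A workaround sketch while those mirrors are broken: disable the failing
repos just for this install (repo IDs taken from the errors above;
whether the resulting package set is still complete is not verified
here):
dnf install ovirt-hosted-engine-setup -y \
    --disablerepo=ovirt-4.4-centos-gluster8 \
    --disablerepo=ovirt-4.4-openstack-victoria \
    --disablerepo=ovirt-4.4-centos-nfv-openvswitch \
    --disablerepo=ovirt-4.4-centos-ceph-pacific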
Unable to migrate VMs to a newly upgraded Ovirt node host
by Giulio Casella
Hi guys,
I just faced a problem after updating a host: I cannot migrate VMs to the
updated host.
Here's the error I see trying to migrate a VM to that host.
Dec 16 10:13:11 host01.ovn.di.unimi.it systemd[1]: Starting Network
Manager Script Dispatcher Service...
Dec 16 10:13:11 host01.ovn.di.unimi.it libvirtd[5667]: Unable to read
from monitor: Connection reset by peer
Dec 16 10:13:11 host01.ovn.di.unimi.it libvirtd[5667]: internal error:
qemu unexpectedly closed the monitor: 2021-12-16T10:13:00.447480Z
qemu-kvm: -numa node,nodeid=0,cpus=0-15,mem=8192: warning: Parameter
-numa node,mem is deprecated, use -numa node,memdev instead
2021-12-16T10:13:11.158057Z qemu-kvm: Failed to load pckbd:kbd
2021-12-16T10:13:11.158114Z qemu-kvm: error while loading state for
instance 0x0 of device 'pckbd'
2021-12-16T10:13:11.158744Z qemu-kvm: load of migration failed: No such
file or directory
Dec 16 10:13:11 host01.ovn.xx.xxxxx.it kvm[35663]: 0 guests now active
On the other hand, I can start VMs on that host and migrate VMs away from it.
Rolling back to ovirt-node-ng-4.4.9.1-0.20211207.0+1 via host console
restores full functionality.
The affected version is ovirt-node-ng-4.4.9.3-0.20211215.0+1 (and also
the previous one, I don't remember precisely; it was another async release).
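A diagnostic sketch, on the assumption (not confirmed from the logs) that
the "Failed to load pckbd" error comes from mismatched qemu-kvm builds
between the source host and the freshly updated destination:
# Compare on the source host and on the updated destination host
rpm -q qemu-kvm libvirt-daemon-kvm
# On oVirt Node, the installed image layers can be listed as well
nodectl info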
Any ideas?
TIA,
gc
Re: Migrate Hosted-Engine from one NFS mount to another NFS mount
by Sketch
On Wed, 23 Feb 2022, Matthew.Stier(a)fujitsu.com wrote:
> I’ve always been told that migrating self-hosted-engine storage was a
> backup, shutdown, and rebuild from backup procedure.
>
> In my iscsi environment it has never worked. (More due to the history of my
> environment, than the procedure itself.)
This didn't work for me either, though it may have had to do with the many
issues I had moving from 4.3 to 4.4. What did work was backing up and
restoring to a standalone engine.
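For reference, the backup/restore path mentioned above is roughly the
engine-backup flow; a minimal sketch (file names are placeholders, and
the exact restore flags can vary between 4.4 minor versions):
# On the old (self-hosted) engine
engine-backup --mode=backup --scope=all --file=engine-backup.tar.gz --log=backup.log
# On the new standalone engine machine, after installing the engine packages
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
    --provision-db --provision-dwh-db --restore-permissions
engine-setup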
I had initially planned to migrate it back to a self-hosted setup later,
but found that this setup is a lot less temperamental than the self-hosted
engine. It also doesn't have to be a dedicated machine; it can be a VM
hosted outside of oVirt, which may also be useful if you're hosting some
stuff needed for bootstrapping the environment like DNS, NTP, etc. outside
of oVirt.