rhv-log-collector-analyzer unavailable/missing?
by Juhani Rautiainen
Hi!
I'm trying to upgrade SHE from 4.3 to 4.4. The instructions at
https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
have a step "4.2 Analyzing the Environment". My installation doesn't
provide the tool rhv-log-collector-analyzer, although the wiki has a notice
that it has been available since v4.2.5.
[root@ovirtmgr ~]# yum install rhv-log-collector-analyzer
Loaded plugins: fastestmirror, versionlock
Determining fastest mirrors
ovirt-4.3-epel/x86_64/metalink                          |  34 kB  00:00:00
 * base: mirror.hosthink.net
 * extras: mirror.hosthink.net
 * ovirt-4.3: resources.ovirt.org
 * ovirt-4.3-epel: mirrors.nxthost.com
 * updates: mirror.hosthink.net
base                                                    | 3.6 kB  00:00:00
centos-sclo-rh-release                                  | 3.0 kB  00:00:00
extras                                                  | 2.9 kB  00:00:00
ovirt-4.3                                               | 3.0 kB  00:00:00
ovirt-4.3-centos-gluster6                               | 3.0 kB  00:00:00
ovirt-4.3-centos-opstools                               | 2.9 kB  00:00:00
ovirt-4.3-centos-ovirt-common                           | 3.0 kB  00:00:00
ovirt-4.3-centos-ovirt43                                | 2.9 kB  00:00:00
ovirt-4.3-centos-qemu-ev                                | 3.0 kB  00:00:00
ovirt-4.3-epel                                          | 4.7 kB  00:00:00
ovirt-4.3-virtio-win-latest                             | 3.0 kB  00:00:00
sac-gluster-ansible                                     | 3.3 kB  00:00:00
updates                                                 | 2.9 kB  00:00:00
(1/9): extras/7/x86_64/primary_db                       | 232 kB  00:00:00
(2/9): ovirt-4.3-centos-gluster6/x86_64/primary_db      | 120 kB  00:00:00
(3/9): ovirt-4.3-epel/x86_64/group_gz                   |  96 kB  00:00:00
(4/9): base/7/x86_64/primary_db                         | 6.1 MB  00:00:00
(5/9): ovirt-4.3-epel/x86_64/updateinfo                 | 1.0 MB  00:00:00
(6/9): ovirt-4.3-epel/x86_64/primary_db                 | 6.9 MB  00:00:00
(7/9): centos-sclo-rh-release/x86_64/primary_db         | 2.9 MB  00:00:00
(8/9): sac-gluster-ansible/x86_64/primary_db            |  12 kB  00:00:00
(9/9): updates/7/x86_64/primary_db                      | 7.1 MB  00:00:00
No package rhv-log-collector-analyzer available.
Error: Nothing to do
Are we missing a repo, or is this just a copy/paste error from the RHV docs
and this step shouldn't even be in the oVirt docs?
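One thing worth checking is whether the tool ships under a different (oVirt rather than RHV) package name. A minimal sketch, assuming yum on this CentOS 7 engine host with the oVirt 4.3 repos enabled; the search patterns below are guesses, not confirmed package names:
```
# Look for anything resembling the analyzer in the enabled repos
yum search log-collector
yum provides '*log-collector-analyzer*'
yum list available 'ovirt-log-collector*'
```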
Thanks,
Juhani
--
Juhani Rautiainen jrauti(a)iki.fi
3 years, 8 months
Re: FreeBSD 13 and virtio
by Thomas Hoberg
I've just given it a try: works just fine for me.
But I did notice that I chose virtio-scsi when I created the disk; don't know if that makes any difference, but as an old-timer, I still have "SCSI" ingrained as "better than ATA".
I chose FreeBSD 9.2 x64 as the OS type while creating the VM (nothing closer appears in the selection box); again, I don't know if there are better choices.
iperf3 shows slightly lower performance than a typical Linux guest, still rather close to 10 Gbit/s on my network from the FreeBSD 13 guest to a host "next door", and around 40 Gbit/s VM to host.
Verified usage of the vtnet0/VirtIO NIC and vtscsi0/VirtIO SCSI via dmesg.
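For reference, a rough sketch of those checks (standard FreeBSD and iperf3 usage; the IP address is a placeholder, not taken from the setup above):
```
# Inside the FreeBSD 13 guest: confirm the VirtIO NIC and SCSI drivers attached
dmesg | grep -i -E 'vtnet|vtscsi'

# Throughput test, guest to the host "next door" (replace 192.0.2.10 with the real host IP)
iperf3 -s                 # on the host
iperf3 -c 192.0.2.10      # in the guest
```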
3 years, 8 months
Is it possible to upgrade 3 node HCI from 4.3 to 4.4?
by Jayme
I have a fairly stock three-node HCI setup running oVirt 4.3.9. The hosts
are oVirt Node. I'm using GlusterFS storage for the self-hosted engine and
for some VMs. I also have some other VMs running from an external NFS
storage domain.
Is it possible for me to upgrade this environment to 4.4 while keeping some
of the VMs on GlusterFS storage domains running? Does the upgrade require
an additional physical host? I'm reading through the upgrade guide and it's
not very clear to me how the engine is reinstalled on CentOS 8. Do you need
a 4th host, or do you take one of the three existing hosts down, wipe it,
and install oVirt Node 4.4?
Would it be easier for me to just move all VMs to my NFS storage domain,
wipe all hosts and deploy a brand new 4.4 HCI cluster then import the NFS
storage domain? Will the 4.3 VMs on the NFS storage domain import/run
properly into the new 4.4 deployment or are there any other considerations?
3 years, 8 months
intel_iommu=on kvm-intel.nested=1 deactivates rp_filter kernel option
by Nathanaël Blanchet
Hello,
I have two kinds of hosts:
* some with default ovirt node 4.4 kernel settings
* some with custom kernel settings including intel_iommu=on
kvm-intel.nested=1
I can't open a VM console from a host in the second category when connecting from
a different VLAN, because the host is unreachable.
But if I set sysctl -w net.ipv4.conf.all.rp_filter=2, I can reach the
host and open a VM console.
I didn't test whether this behaviour is caused by Hostdev Passthrough &
SR-IOV or by Nested Virtualization.
Is it an expected behaviour or a bug?
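For what it's worth, a minimal sketch of making that workaround persistent across reboots (the file name is arbitrary, and whether rp_filter=2 is the right long-term setting is exactly the open question here):
```
# Loose reverse-path filtering, applied at boot via sysctl.d
cat > /etc/sysctl.d/99-rp-filter.conf <<'EOF'
net.ipv4.conf.all.rp_filter = 2
EOF
sysctl -p /etc/sysctl.d/99-rp-filter.conf
```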
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
3 years, 8 months
Two selfhosted engines
by wilrikvdvliert@hotmail.com
Dear forum,
If I understand correctly, I need shared storage for the self-hosted engine to make it highly available. But is it possible to create 2 self-hosted engines on 2 hosts and make it highly available that way?
Kind regards
3 years, 8 months
ova import
by eevans@digitaldatatechs.com
I am running oVirt 4.3 and exported machines as OVA, which includes the disks. When I re-import them, the disks are thin provisioned and the VMs will not boot.
Any idea how to get the import to preallocate disks? I have VMs that will not boot.
It states that no bootable disk is available.
It seems virt-v2v isn't working correctly.
Any help would be appreciated.
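As a first diagnostic, a sketch only (the path below is illustrative; the real location depends on the storage domain layout), qemu-img can show how the imported disk actually came in:
```
# On the host that mounts the storage domain (path is a made-up example)
qemu-img info /rhev/data-center/mnt/<server:export>/<sd-uuid>/images/<img-uuid>/<vol-uuid>
# "virtual size" vs. "disk size" shows how sparse the volume is,
# and "file format" shows whether it is qcow2 or raw
```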
Eric
3 years, 8 months
[Help] The engine is not stable on a HostedEngine with GlusterFS hyperconverged deployment
by mengz.you@outlook.com
Hi,
I deployed an oVirt (4.3.10) cluster with HostedEngine and GlusterFS volumes (engine, vmstore, data); the GlusterFS cluster is on node1/node2/node3, and the engine VM can run on those 3 nodes.
Then I added a 4th node to the cluster.
But when I operate the Engine Web Portal, it always reports a 503 error. I then checked `hosted-engine --vm-status`, see below:
```
[root@vhost1 ~]# hosted-engine --vm-status

--== Host vhost1.yhmk.lan (id: 1) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : vhost1.alatest.lan
Host ID                            : 1
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 1f25baff
local_conf_timestamp               : 1253650
Host timestamp                     : 1253649
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=1253649 (Thu Apr 8 08:05:48 2021)
        host-id=1
        score=0
        vm_conf_refresh_time=1253650 (Thu Apr 8 08:05:48 2021)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineUnexpectedlyDown
        stopped=False
        timeout=Thu Jan 15 20:23:29 1970

--== Host vhost2.yhmk.lan (id: 2) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : vhost2.alatest.lan
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 539fc30c
local_conf_timestamp               : 1253343
Host timestamp                     : 1253343
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=1253343 (Thu Apr 8 08:05:46 2021)
        host-id=2
        score=3400
        vm_conf_refresh_time=1253343 (Thu Apr 8 08:05:46 2021)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineDown
        stopped=False

--== Host vhost3.yhmk.lan (id: 3) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : vhost3.alatest.lan
Host ID                            : 3
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 4072e0b8
local_conf_timestamp               : 1252345
Host timestamp                     : 1252345
Extra metadata (valid at timestamp):
        metadata_parse_version=1
        metadata_feature_version=1
        timestamp=1252345 (Thu Apr 8 08:05:42 2021)
        host-id=3
        score=3400
        vm_conf_refresh_time=1252345 (Thu Apr 8 08:05:42 2021)
        conf_on_shared_storage=True
        maintenance=False
        state=EngineStarting
        stopped=False
```
Then, after waiting a moment, I can access the web portal again. When I check the hosts' status, it always reports one or more hosts with the label `unavailable as HA score`, but that disappears later.
And I found that sometimes the engine VM migrates to another node while this problem occurs.
So it seems the HostedEngine is not stable and this problem keeps occurring. Could you please help me with this? Thanks!
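For anyone trying to reproduce this, a rough sketch of where the relevant state can be inspected on each node (standard hosted-engine HA log locations and gluster commands; `engine` is the volume name from the deployment described above):
```
# Agent/broker logs usually explain an EngineUnexpectedlyDown state
tail -n 50 /var/log/ovirt-hosted-engine-ha/agent.log
tail -n 50 /var/log/ovirt-hosted-engine-ha/broker.log

# Health of the gluster volume backing the engine storage domain
gluster volume status engine
gluster volume heal engine info

# The HA view again, repeated after a few minutes
hosted-engine --vm-status
```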
3 years, 8 months
Re: ovirt-hosted-engine-cleanup
by Thomas Hoberg
ovirt-hosted-engine-cleanup will only operate on the host you run it on.
In a cluster that might have side-effects, but as a rule it will try to undo all configuration settings that had a Linux host become an HCI member or just a host under oVirt management.
While the GUI will try to do the same with Ansible only, the script seems to do a little more and be more thorough. My guess is that it's somewhat of a leftover from the older 'all scripts' legacy of oVirt, before they re-engineered much of it to fit Ansible.
It's not perfect, however. I have come across things that it would not undo, and that were extremely difficult to find (a really clean re-install would then help). There was an issue around keys and certificates for KVM that I only remember as a nightmare, and more recently I had an issue with its Ansible scripts inserting an LVM device filter, specific to a device ID, that the HCI setup wizard had created, which then precluded it ever working again after running the *cleanup scripts.
All I got was "device excluded by a filter", and only much later did I find the added line in /etc/lvm/lvm.conf that had not been undone, which caused this nail-biter.
But generally that message "Is hosted engine setup finished?" mostly indicates that the four major daemons for HCI still have issues.
It all starts with glusterd: if you just added a host with storage in HCI, the volumes will want to be properly healed. AFAIK that doesn't actually preclude already using the machine to host VMs, but in the interest of keeping your head clear, I'd suggest concentrating on getting the storage all clean, with gluster happily showing connectivity all around (gluster volume status all etc.) and 'gluster volume heal <volume> [info]' giving good results.
Then you need to check on the services ovirt-ha-broker and ovirt-ha-agent as well as vdsmd via 'systemctl status <servicename>' to see what they are complaining about. I typically restart them in this order to iron out the bumps. As long as you give them a bit of time, restarting them periodically doesn't seem to do any harm, but it helps them recover from hangs.
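As a concrete sketch of the order just described (the volume name is an example; use whatever 'gluster volume list' reports on your setup):
```
# Gluster first: connectivity and pending heals
gluster volume status all
gluster volume list
gluster volume heal engine info      # repeat per volume

# Then the HA daemons and vdsm, restarted in this order, with some patience in between
systemctl status ovirt-ha-broker ovirt-ha-agent vdsmd
systemctl restart ovirt-ha-broker
systemctl restart ovirt-ha-agent
systemctl restart vdsmd
```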
And then you simply need patience: Things don't happen immediately in oVirt, because few commands ever intervene into anything directly. Instead they set flags here and there which then a myriad of interlocking state machine gears will pick up and respond to, to do things in the proper order and by avoiding the race conditions manual intervention can so easily cause. Even the fastest systems won't give results in seconds.
So have a coffee, better yet, some herbal tea, and then look again. There is a good chance oVirt will eventually sort it out (but restarting those daemons really can help things along, as will looking at those error messages and their potential cause).
3 years, 8 months
Introduction & general question about oVirt
by Nicolas Kovacs
Hi,
I'm a 53-year-old Austrian living in Montpezat, a small village in the South of
France. I'm an IT professional with a focus on Linux and free software, and
I've been a Linux user since Slackware 7.1.
I'm doing web & mail hosting for myself and several small structures like our
local school and a handful of local companies. Up until recently this hosting
has happened on "bare metal" root servers using CentOS 7. One main server is
hosting most of the stuff: WordPress sites, one OwnCloud instance, Dolibarr
management software, GEPI learning platform, Postfix/Dovecot mail server,
Roundcube webmail, etc.
This setup has become increasingly problematic to manage, since applications
have more and more specific requirements, like different versions of PHP and
corresponding modules.
So I decided to split everything up nicely into a series of virtual machines,
each one with a nicely tailored setup.
I have a couple of sandbox servers, one public and one local, running Oracle
Linux 7 (a RHEL clone like CentOS). I played around with it, and KVM-based
virtualization already works quite nicely.
While looking for documentation, I stumbled over oVirt, which I didn't even
know existed until last week. Before I dive head first into it, I'd be curious
to know a few general things.
1. Would it be overkill for a small structure like mine?
2. Will I be able to do HA on a series of modest KVM-capable root servers even
if they are located in different datacenters across different countries?
3. One problem I couldn't resolve using a bone-headed keep-it-simple KVM setup
is backup. For my bare-metal servers I've been using incremental backups using
Rsnapshot for years. Here's a blog article I wrote on the subject:
https://blog.microlinux.fr/rsnapshot-centos-7/
Unfortunately I can't use this approach with huge QCOW images, at least not
without jumping through burning hoops.
Is there an easy way to perform remote incremental backups with oVirt?
BTW, I took a peek at Proxmox and Ceph, but I admit I'm a die-hard RHEL-clone
user.
Cheers from the sunny South of France,
Niki
--
Microlinux - Solutions informatiques durables
7, place de l'église - 30730 Montpezat
Site : https://www.microlinux.fr
Blog : https://blog.microlinux.fr
Mail : info(a)microlinux.fr
Tél. : 04 66 63 10 32
Mob. : 06 51 80 12 12
3 years, 8 months
Upgrading single host ovirt "clusters" to v4.5 via command line?
by Gilboa Davara
Hello all,
As the title suggests, I have a couple of single-host hyperconverged
setups, mostly used for testing.
As I cannot use the GUI cluster upgrade method on a single host (it
requires me to reboot the host, which will require me to shut down the
hosted engine), is there any method to upgrade the cluster / data center
via the command line in maintenance=global mode?
Beyond that, can I use the same method on my production setups?
(multi-node, gluster based setups).
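For context, a rough sketch of what the command-line flow looks like under global maintenance, built from the standard hosted-engine and engine-setup tooling (this is not a confirmed 4.5 single-host upgrade procedure, and repo enablement for the target release is assumed to be done already):
```
# On the host: global maintenance so the HA agents leave the engine VM alone
hosted-engine --set-maintenance --mode=global

# Inside the engine VM: upgrade the engine itself
engine-upgrade-check
dnf update ovirt\*setup\*
engine-setup

# Back on the host: shut the engine VM down cleanly, then update and reboot the host
hosted-engine --vm-shutdown
dnf update
reboot
# after reboot: hosted-engine --vm-start, then hosted-engine --set-maintenance --mode=none
```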
Thanks,
Gilboa
3 years, 8 months