Re: FreeBSD 13 and virtio
by Sandro Bonazzola
Maybe you can have a look at
https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=236922 "Bug 236922 - Virtio
fails as QEMU-KVM guest with Q35 chipset on Ubuntu 18.04.2 LTS"
it's from ~20 days ago and the bug is still open
On Tue, 20 Apr 2021 at 14:16, Nur Imam Febrianto <
nur_imam(a)outlook.com> wrote:
> Seems strange. I want to use q35, but whenever I try even to start the
> installation (vm disk using virtio-scsi/virtio, net adapter using virtio)
> it always shows me the installer doesn’t detect any disk. I have an
> existing VM too that was recently upgraded from 12.2 to 13. It uses i440FX
> with a virtio-scsi disk and virtio network. If I try to change the machine
> type to q35, it gets stuck at boot after "promiscuous mode enabled".
>
> Don't know what's wrong. Using oVirt 4.4.5. Can you share your VM config?
>
>
>
> *From: *Thomas Hoberg <thomas(a)hoberg.net>
> *Sent: *20 April 2021 17:27
> *To: *users(a)ovirt.org
> *Subject: *[ovirt-users] Re: FreeBSD 13 and virtio
>
>
>
> q35 with BIOS as that is the cluster default with >4.3.
>
> Running the dmesg messages through my mind as I remember them, the virtio
> hardware may be all PCIe based, which would explain why this won't work on
> a virtual i440FX system, because those didn't have PCIe support AFAIK.
>
> Any special reason why you'd want them based on i440FX?
>
> And I also tested with GhostBSD, which is still 12.x based, and that
> doesn't seem to have virtio support; at least I could not see a hard disk
> there, which confirms your observation.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/KFTCXCLC6Q5...
>
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
<https://mojo.redhat.com/docs/DOC-1199578>
4 years
oVirt Node
by KSNull Zero
Hello!
We want to switch an OS-based oVirt installation to an oVirt Node-based one.
Is it safe to have OS-based hosts and oVirt Node hosts in the same cluster (with FC shared storage) during the transition?
Thank you.
4 years
rhv-log-collector-analyzer available missing?
by Juhani Rautiainen
Hi!
I'm trying to upgrade SHE from 4.3 to 4.4. The instructions at
https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
have a step "4.2 Analyzing the Environment". My installation doesn't
provide the tool rhv-log-collector-analyzer, although the wiki notes
that it has been available since v4.2.5?
[root@ovirtmgr ~]# yum install rhv-log-collector-analyzer
Loaded plugins: fastestmirror, versionlock
Determining fastest mirrors
ovirt-4.3-epel/x86_64/metalink                     |  34 kB  00:00:00
 * base: mirror.hosthink.net
 * extras: mirror.hosthink.net
 * ovirt-4.3: resources.ovirt.org
 * ovirt-4.3-epel: mirrors.nxthost.com
 * updates: mirror.hosthink.net
base                                               | 3.6 kB  00:00:00
centos-sclo-rh-release                             | 3.0 kB  00:00:00
extras                                             | 2.9 kB  00:00:00
ovirt-4.3                                          | 3.0 kB  00:00:00
ovirt-4.3-centos-gluster6                          | 3.0 kB  00:00:00
ovirt-4.3-centos-opstools                          | 2.9 kB  00:00:00
ovirt-4.3-centos-ovirt-common                      | 3.0 kB  00:00:00
ovirt-4.3-centos-ovirt43                           | 2.9 kB  00:00:00
ovirt-4.3-centos-qemu-ev                           | 3.0 kB  00:00:00
ovirt-4.3-epel                                     | 4.7 kB  00:00:00
ovirt-4.3-virtio-win-latest                        | 3.0 kB  00:00:00
sac-gluster-ansible                                | 3.3 kB  00:00:00
updates                                            | 2.9 kB  00:00:00
(1/9): extras/7/x86_64/primary_db                  | 232 kB  00:00:00
(2/9): ovirt-4.3-centos-gluster6/x86_64/primary_db | 120 kB  00:00:00
(3/9): ovirt-4.3-epel/x86_64/group_gz              |  96 kB  00:00:00
(4/9): base/7/x86_64/primary_db                    | 6.1 MB  00:00:00
(5/9): ovirt-4.3-epel/x86_64/updateinfo            | 1.0 MB  00:00:00
(6/9): ovirt-4.3-epel/x86_64/primary_db            | 6.9 MB  00:00:00
(7/9): centos-sclo-rh-release/x86_64/primary_db    | 2.9 MB  00:00:00
(8/9): sac-gluster-ansible/x86_64/primary_db       |  12 kB  00:00:00
(9/9): updates/7/x86_64/primary_db                 | 7.1 MB  00:00:00
No package rhv-log-collector-analyzer available.
Error: Nothing to do
Are we missing a repo, or is this just a copy/paste error from the RHV docs
and this step shouldn't even be in the oVirt docs?
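For reference, a quick way to check whether a similarly named package is
available from the currently enabled repos (a hedged sketch; the search
terms below are guesses, not confirmed package names):
```
yum search log-collector
yum list available '*log-collector*'
```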
Thanks,
Juhani
--
Juhani Rautiainen jrauti(a)iki.fi
4 years
Re: FreeBSD 13 and virtio
by Thomas Hoberg
I've just given it a try: it works just fine for me.
But I did notice that I chose virtio-scsi when I created the disk; I don't know if that makes any difference, but as an old-timer I still have "SCSI" ingrained as "better than ATA".
I chose FreeBSD 9.2 x64 as the OS type while creating the VM (nothing closer appears in the selection box); again, I don't know if there are better choices.
iperf3 shows slightly lower performance than a typical Linux guest, still rather close to 10 Gbit/s on my network from the FreeBSD 13 guest to a host "next door", and around 40 Gbit/s VM to host.
Verified usage of vtnet0 (VirtIO NIC) and vtscsi0 (VirtIO SCSI) via dmesg.
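For anyone who wants to repeat that check, a minimal sketch run inside the FreeBSD 13 guest (device names assume the VirtIO drivers attached as above):
```
# Confirm the VirtIO NIC and SCSI controller were probed at boot
dmesg | grep -Ei 'vtnet|vtscsi|virtio'
# List PCI devices with vendor/device strings; the VirtIO devices should show up here
pciconf -lv | grep -i -B3 virtio
```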
4 years
Is it possible to upgrade 3 node HCI from 4.3 to 4.4?
by Jayme
I have a fairly stock three node HCI setup running oVirt 4.3.9. The hosts
are oVirt node. I'm using GlusterFS storage for the self hosted engine and
for some VMs. I also have some other VMs running from an external NFS
storage domain.
Is it possible for me to upgrade this environment to 4.4 while keeping some
of the VMs on GlusterFS storage domains running? Does the upgrade require
an additional physical host? I'm reading through the upgrade guide and it's
not very clear to me how the engine is reinstalled on CentOS8. Do you need
a 4th host, or do you take one of the three existing hosts down, wipe it
and install ovirt node 4.4?
Would it be easier for me to just move all VMs to my NFS storage domain,
wipe all hosts and deploy a brand new 4.4 HCI cluster then import the NFS
storage domain? Will the 4.3 VMs on the NFS storage domain import/run
properly into the new 4.4 deployment or are there any other considerations?
4 years
intel_iommu=on kvm-intel.nested=1 deactivates rp_filter kernel option
by Nathanaël Blanchet
Hello,
I have two kinds of hosts:
* some with default ovirt node 4.4 kernel settings
* some with custom kernel settings including intel_iommu=on
kvm-intel.nested=1
I can't open a VM console on hosts of the second category when connecting from
a different VLAN, because the host is unreachable.
But if I set sysctl -w net.ipv4.conf.all.rp_filter=2, I can reach the
host and open a VM console.
I didn't test if this behaviour is because of Hostdev Passthrough &
SR-IOV or Nested Virtualization.
Is this expected behaviour or a bug?
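In case it helps others, a sketch of how the workaround could be made persistent across reboots (assuming loose reverse-path filtering is acceptable on these hosts; the file name is just an example):
```
# /etc/sysctl.d/90-rp-filter.conf (example file name)
# Loose reverse-path filtering, as tested above with "sysctl -w"
net.ipv4.conf.all.rp_filter = 2
```
Apply with `sysctl --system` or a reboot.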
--
Nathanaël Blanchet
Supervision réseau
SIRE
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
4 years
Two selfhosted engines
by wilrikvdvliert@hotmail.com
Dear forum,
If I understand correctly, I need shared storage for the self-hosted engine to make it highly available. But is it possible to create 2 self-hosted engines on 2 hosts and make it highly available this way?
Kind regards
4 years
ova import
by eevans@digitaldatatechs.com
I am running oVirt 4.3 and exported machines as OVA, which includes the disks. When I re-import them, the disks are thin provisioned and the VM will not boot.
Any idea how to get the import to preallocate disks? I have VMs that will not boot.
It states that no bootable disk is available.
It seems virt-v2v isn't working correctly.
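For what it's worth, a sketch of how the imported images could be inspected and converted to preallocated copies outside oVirt (paths are placeholders; assumes direct access to the disk images while the VM is down):
```
# Check format, virtual size and actual allocation of the imported disk
qemu-img info /path/to/imported-disk
# Make a fully preallocated qcow2 copy (adjust paths and output format as needed)
qemu-img convert -p -O qcow2 -o preallocation=full /path/to/imported-disk /path/to/preallocated-disk
```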
Any help would be appreciated.
Eric
4 years
[Help] The engine is not stable on HostedEngine with GlusterFS Hyperconverged deployment
by mengz.you@outlook.com
Hi,
I deployed an oVirt (4.3.10) cluster with HostedEngine and GlusterFS volumes (engine, vmstore, data); the GlusterFS cluster is on node1/node2/node3, and the engine VM can run on those 3 nodes.
Then I added a 4th node to the cluster.
But when I operate the Engine Web Portal, it always reports a 503 error. I then checked `hosted-engine --vm-status`, see below:
```
[root@vhost1 ~]# hosted-engine --vm-status

--== Host vhost1.yhmk.lan (id: 1) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : vhost1.alatest.lan
Host ID                            : 1
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "down_unexpected", "detail": "Down"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 1f25baff
local_conf_timestamp               : 1253650
Host timestamp                     : 1253649
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1253649 (Thu Apr 8 08:05:48 2021)
    host-id=1
    score=0
    vm_conf_refresh_time=1253650 (Thu Apr 8 08:05:48 2021)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineUnexpectedlyDown
    stopped=False
    timeout=Thu Jan 15 20:23:29 1970

--== Host vhost2.yhmk.lan (id: 2) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : vhost2.alatest.lan
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down_unexpected", "detail": "unknown"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 539fc30c
local_conf_timestamp               : 1253343
Host timestamp                     : 1253343
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1253343 (Thu Apr 8 08:05:46 2021)
    host-id=2
    score=3400
    vm_conf_refresh_time=1253343 (Thu Apr 8 08:05:46 2021)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineDown
    stopped=False

--== Host vhost3.yhmk.lan (id: 3) status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : vhost3.alatest.lan
Host ID                            : 3
Engine status                      : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "Powering up"}
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : 4072e0b8
local_conf_timestamp               : 1252345
Host timestamp                     : 1252345
Extra metadata (valid at timestamp):
    metadata_parse_version=1
    metadata_feature_version=1
    timestamp=1252345 (Thu Apr 8 08:05:42 2021)
    host-id=3
    score=3400
    vm_conf_refresh_time=1252345 (Thu Apr 8 08:05:42 2021)
    conf_on_shared_storage=True
    maintenance=False
    state=EngineStarting
    stopped=False
```
Then, after waiting a moment, I can access the web portal again. When I check the hosts' status, it always reports one or more hosts with the label `unavailable as HA score`, but that disappears later.
I also found that sometimes the engine VM gets migrated to another node while this problem occurs.
So it seems the HostedEngine setup is not stable and this problem keeps occurring. Could you please help me with this? Thanks!
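A few checks that might help narrow this down on each node (a sketch; the volume name "engine" matches the volumes listed above):
```
# Hosted-engine HA state as seen from this node
hosted-engine --vm-status
# Health of the HA daemons and VDSM
systemctl status ovirt-ha-agent ovirt-ha-broker vdsmd
# Recent HA agent log entries
journalctl -u ovirt-ha-agent --since "1 hour ago"
# GlusterFS health for the engine volume (repeat for vmstore and data)
gluster volume status engine
gluster volume heal engine info
```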
4 years
Re: ovirt-hosted-engine-cleanup
by Thomas Hoberg
ovirt-hosted-engine-cleanup will only operate on the host you run it on.
In a cluster that might have side-effects, but as a rule it will try to undo all configuration settings that had a Linux host become an HCI member or just a host under oVirt management.
While the GUI will try to do the same with Ansible only, the script seems to do a little more and be more thorough. My guess is that it's somewhat of a leftover from the older 'all scripts' legacy of oVirt, before they re-engineered much of it to fit Ansible.
It's not perfect, however. I have come across things that it would not undo, and that were extremely difficult to find (a really clean re-install would then help). There was an issue around keys and certificates for KVM that I only remember as a nightmare, and more recently I had an issue with its Ansible scripts inserting an LVM device filter specific to a UUID that the HCI setup wizard had created, which then precluded it ever working again after running the *cleanup scripts.
All I got was "device excluded by a filter", and only much later did I find the added line in /etc/lvm/lvm.conf that had not been undone, which caused this nail-biter.
But generally that message "Is hosted engine setup finished?" mostly indicates that the four major daemons for HCI still have issues.
It all starts with glusterd: if you just added a host in HCI with storage, that will want to be properly healed. AFAIK that doesn't actually preclude already using the machine to host VMs, but in the interest of keeping your head clear, I'd suggest concentrating on getting the storage all clean, with gluster happily showing connectivity all around (gluster volume status all etc.) and 'gluster volume heal <volume> [info]' giving good results.
Then you need to check on the services ovirt-ha-broker and ovirt-ha-agent as well as vdsmd via 'systemctl status <servicename>' to see what they are complaining about. I typically restart them in this order to iron out the bumps. As long as you give them a bit of time, restarting them periodically doesn't seem to do any harm, but it helps them recover from hangs.
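A sketch of that check-and-restart sequence (replace <volume> with each Gluster volume name; give the daemons some time between steps):
```
# Gluster first: connectivity and heal state
gluster volume status all
gluster volume heal <volume> info
# Then the HA daemons and VDSM: check status, restart in this order if needed
systemctl status ovirt-ha-broker ovirt-ha-agent vdsmd
systemctl restart ovirt-ha-broker
systemctl restart ovirt-ha-agent
systemctl restart vdsmd
```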
And then you simply need patience: things don't happen immediately in oVirt, because few commands ever intervene in anything directly. Instead they set flags here and there, which a myriad of interlocking state-machine gears will then pick up and respond to, doing things in the proper order while avoiding the race conditions manual intervention can so easily cause. Even the fastest systems won't give results in seconds.
So have a coffee, better yet, some herbal tea, and then look again. There is a good chance oVirt will eventually sort it out (but restarting those daemons really can help things along, as will looking at those error messages and their potential cause).
4 years