Support for Shared SAS storage
by Vinícius Ferrão
Hello,
I have two compute nodes with SAS direct-attached storage sharing the same disks.
Looking at the supported storage types, I can't see this in the documentation: https://www.ovirt.org/documentation/admin-guide/chap-Storage.html
There is local storage in this documentation, but my case is two machines, both using SAS, connected to the same shared disks. It's the VRTX hardware from Dell.
Is there any support for this? It should be just like Fibre Channel and iSCSI, but with SAS instead.
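If it helps: shared SAS LUNs should show up on both hosts as plain multipathed SCSI block devices, much like FC would. Something along these lines should confirm it (illustrative commands only; device names and WWIDs will differ):
multipath -ll                 # the shared LUNs should list the same WWIDs on both hosts
lsblk -o NAME,SIZE,TYPE,WWN   # quick cross-check of block devices and their WWNs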
Thanks,
4 years, 3 months
oVirt 4.4.0 Release is now generally available
by Sandro Bonazzola
oVirt 4.4.0 Release is now generally available
The oVirt Project is excited to announce the general availability of the
oVirt 4.4.0 Release, as of May 20th, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Some of the features included in the oVirt 4.4.0 release require content
that will be available in CentOS Linux 8.2, and they cannot be tested on
RHEL 8.2 yet due to an incompatibility in the openvswitch package shipped
by the CentOS Virt SIG, which requires rebuilding openvswitch on top of
CentOS 8.2. The OVS cluster switch type is not implemented for CentOS 8 hosts.
Please note that oVirt 4.4 only supports clusters and datacenters with
compatibility version 4.2 and above. If clusters or datacenters are running
with an older compatibility version, you need to upgrade them to at least
4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts, you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (see the users mailing list thread
on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
Installation instructions
For the engine: either use the oVirt appliance or install CentOS Linux 8
minimal by following these steps:
- Install the CentOS Linux 8 image from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- dnf module enable -y javapackages-tools pki-deps postgresql:12
- dnf install ovirt-engine
- engine-setup
For the nodes:
Either use the oVirt Node ISO or:
- Install CentOS Linux 8 from
http://centos.mirror.garr.it/centos/8.1.1911/isos/x86_64/CentOS-8.1.1911-...,
selecting the minimal installation.
- dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
- dnf update (reboot if needed)
- Attach the host to the engine and let it be deployed.
Update instructions
Update from oVirt 4.4 Release Candidate
On the engine side and on CentOS hosts, you’ll need to switch from
ovirt44-pre to ovirt44 repositories.
In order to do so, you need to:
1. dnf remove ovirt-release44-pre
2. rm -f /etc/yum.repos.d/ovirt-4.4-pre-dependencies.repo
3. rm -f /etc/yum.repos.d/ovirt-4.4-pre.repo
4. dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
5. dnf update
On the engine side you’ll need to run engine-setup only if you were not
already on the latest release candidate.
On oVirt Node, you’ll need to upgrade with:
1. Move node to maintenance
2. dnf install https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-im...
3. Reboot
4. Activate the host
Update from oVirt 4.3
oVirt 4.4 is available only for CentOS 8. In-place upgrades from previous
installations, based on CentOS 7, are not possible. For the engine, take a
backup and restore it into a new engine. Nodes will need to be reinstalled.
A 4.4 engine can still manage existing 4.3 hosts, but you can’t add new
ones.
For a standalone engine, please refer to upgrade procedure at
https://ovirt.org/documentation/upgrade_guide/#Upgrading_from_4-3
If needed, run ovirt-engine-rename (see engine rename tool documentation at
https://www.ovirt.org/documentation/admin-guide/chap-Utilities.html )
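The backup and restore mentioned above is done with the engine-backup tool; roughly along these lines (a sketch only, file names are placeholders, see the upgrade guide above for the exact options):
# on the old 4.3 engine
engine-backup --mode=backup --file=engine43.bck --log=engine43-backup.log
# copy the backup file to the freshly installed 4.4 engine machine, then
engine-backup --mode=restore --file=engine43.bck --log=engine43-restore.log --provision-all-databases --restore-permissions
engine-setup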
When upgrading hosts:
You need to upgrade one host at a time.
1. Move the host to maintenance. Virtual machines on that host should migrate automatically to a different host.
2. Remove it from the engine.
3. Re-install it with EL8 or oVirt Node as per the installation instructions.
4. Re-add the host to the engine.
Please note that you may see some issues live migrating VMs from EL7 to
EL8. If you hit such a case, please shut down the VM on the EL7 host and
start it on the new EL8 host in order to be able to move the next EL7 host
to maintenance.
What’s new in oVirt 4.4.0 Release?
- Hypervisors based on CentOS Linux 8 (rebuilt from award-winning RHEL 8), for both oVirt Node and standalone CentOS Linux hosts.
- Easier network management and configuration flexibility with NetworkManager.
- VMs based on a more modern Q35 chipset with legacy SeaBIOS and UEFI firmware.
- Support for direct passthrough of local host disks to VMs.
- Live migration improvements for High Performance guests.
- New Windows guest tools installer based on the WiX framework, now moved to the VirtioWin project.
- Dropped support for cluster levels prior to 4.2.
- Dropped API/SDK v3 support, deprecated in past versions.
- 4K block disk support only for file-based storage; iSCSI/FC storage does not support 4K disks yet.
- You can export a VM to a data domain.
- You can edit floating disks.
- Ansible Runner (ansible-runner) is integrated within the engine, enabling more detailed monitoring of playbooks executed from the engine.
- Adding and reinstalling hosts is now completely based on Ansible, replacing ovirt-host-deploy, which is no longer used.
- The OpenStack Neutron Agent can no longer be configured by oVirt; it should be configured by TripleO instead.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.1
* CentOS Linux (or similar) 8.1
* oVirt Node 4.4 based on CentOS Linux 8.1 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
If you manage more than one oVirt instance, OKD, or RDO, we also recommend
trying ManageIQ <http://manageiq.org/>.
In such a case, please be sure to take the qc2 image and not the ova image.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional Resources:
* Read more about the oVirt 4.4.0 release highlights:
http://www.ovirt.org/release/4.4.0/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.0/
[2] http://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
4 years, 3 months
4.4 HCI Install Failure - Missing /etc/pki/CA/cacert.pem
by Stephen Panicho
Hi all! I'm using Cockpit to perform an HCI install, and it fails at the
hosted engine deploy. Libvirtd can't restart because of a missing
/etc/pki/CA/cacert.pem file.
The log (tasks seemingly from
/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/initial_clean.yml):
[ INFO ] TASK [ovirt.hosted_engine_setup : Stop libvirt service]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Drop vdsm config statements]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial abrt config
files]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restart abrtd service]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Drop libvirt sasl2 configuration
by vdsm]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Stop and disable services]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Restore initial libvirt default
network configuration]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.hosted_engine_setup : Start libvirt]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable
to start service libvirtd: Job for libvirtd.service failed because the
control process exited with error code.\nSee \"systemctl status
libvirtd.service\" and \"journalctl -xe\" for details.\n"}
journalctl -u libvirtd:
May 22 04:33:25 node1 libvirtd[26392]: libvirt version: 5.6.0, package:
10.el8 (CBS <cbs(a)centos.org>, 2020-02-27-01:09:46, )
May 22 04:33:25 node1 libvirtd[26392]: hostname: node1
May 22 04:33:25 node1 libvirtd[26392]: Cannot read CA certificate
'/etc/pki/CA/cacert.pem': No such file or directory
May 22 04:33:25 node1 systemd[1]: libvirtd.service: Main process exited,
code=exited, status=6/NOTCONFIGURED
May 22 04:33:25 node1 systemd[1]: libvirtd.service: Failed with result
'exit-code'.
May 22 04:33:25 node1 systemd[1]: Failed to start Virtualization daemon.
From a fresh CentOS 8.1 minimal install, I've installed the following:
- The 4.4 repo
- cockpit
- ovirt-cockpit-dashboard
- vdsm-gluster (providing glusterfs-server and allowing the Gluster Wizard
to complete)
- gluster-ansible-roles (only on the bootstrap host)
I'm not exactly sure what that initial bit of the playbook does. Comparing
the bootstrap node with another that has yet to be touched,
/etc/libvirt/libvirtd.conf and /etc/sysconfig/libvirtd are identical on
both hosts. Yet the bootstrap host can no longer start libvirtd while the
other host can. Neither host has the /etc/pki/CA/cacert.pem file.
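For reference, /etc/pki/CA/cacert.pem is libvirt's compiled-in default CA path when TLS listening is enabled, so the relevant knobs should be the standard libvirt TLS settings; a quick way to see what is actually active (generic libvirt checks, nothing oVirt-specific):
grep -E '^(listen_tls|listen_tcp|ca_file|cert_file|key_file)' /etc/libvirt/libvirtd.conf
grep LIBVIRTD_ARGS /etc/sysconfig/libvirtd     # a --listen here switches the TLS listener on
systemctl cat libvirtd.service | grep -i listen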
Please let me know if I can provide any more information. Thanks!
4 years, 3 months
Deploy hosted-engine error: The task includes an option with an undefined variable
by Angel R. Gonzalez
Hi all!
I'm deploying a hosted engine on a host node with an 8x Intel(R) Xeon(R)
CPU E5410 @ 2.33GHz.
The deploy process shows the following message:
> [INFO]TASK [ovirt.hosted_engine_setup : Convert CPU model name]
> [ERROR]fatal: [localhost]: FAILED! => {"msg": "The task includes an
> option with an undefined variable. The error was: 'dict object' has no
> attribute ''\n\nThe error appears to be in
> '/usr/share/ansible/roles/ovirt.hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml':
> line 105, column 15, but may\nbe elsewhere in the file depending on
> the exact syntax problem.\n\nThe offending line appears to be:\n\n -
> debug: var=server_cpu_dict\n ^ here\n\nThere appears to be both 'k=v'
> shorthand syntax and YAML in this task. Only one syntax may be used.\n"}
Line 105 of the Ansible deploy script shows:
> - name: Parse server CPU list
> set_fact:
> server_cpu_dict: "{{ server_cpu_dict |
> combine({item.split(':')[1]: item.split(':')[3]}) }}"
> with_items: >-
> {{
> server_cpu_list.json['values']['system_option_value'][0]['value'].split(';
> ')|list|difference(['']) }}
> - debug: var=server_cpu_dict
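From what I could gather, that task just builds a dictionary mapping CPU family names to CPU models out of the engine's ServerCPUList option, whose entries look roughly like "order:name:flags:verb:arch". A shell re-creation of the split, with a made-up sample entry:
# keeps field 2 (the name) and field 4 (the model), like item.split(':')[1] and [3] above
echo "1:Intel Nehalem Family:vmx,nx,model_Nehalem:Nehalem:x86_64" | awk -F: '{printf "%s -> %s\n", $2, $4}'
# prints: Intel Nehalem Family -> Nehalem
If the detected CPU has no entry in that list (the E5410 is a fairly old Penryn-class part), the later lookup may come back with an empty key, which would be consistent with the "'dict object' has no attribute ''" error.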
I don't know Ansible and I don't know how to resolve this issue. Any ideas?
Thanks in advance,
Ángel González.
4 years, 3 months
oVirt thrashes Docker network during installation
by thomas@hoberg.net
I want to run containers and VMs side by side and not necessarily nested. The main reason for that is GPUs, Voltas mostly, used for CUDA machine learning not for VDI, which is what most of the VM orchestrators like oVirt or vSphere seem to focus on. And CUDA drivers are notorious for refusing to work under KVM unless you pay $esla.
oVirt is more of a side show in my environment, used to run some smaller functional VMs alongside bigger containers, but also in order to consolidate and re-distribute the local compute node storage as a Gluster storage pool: Kibbutz storage and compute, if you want, very much how I understand the HCI philosophy behind oVirt.
The full integration of containers and VMs is still very much on the roadmap I believe, but I was surprised to see that even co-existence seems to be a problem currently.
So I set up a 3-node HCI on CentOS 7 (GPU-less and older) hosts and then added additional (beefier GPGPU) CentOS 7 hosts that have been running CUDA workloads on the latest Docker-CE v19-something.
The installation works fine, I can migrate VMs to these extra hosts etc., but to my dismay Docker containers on these hosts lose access to the local network, that is, the entire subnet the host is in. For some strange reason I can still ping Internet hosts, perhaps even everything behind the host's gateway, but local connections are blocked.
It would seem that the ovirtmgmt network that the oVirt installation puts in breaks the docker0 bridge that Docker put there first.
I'd consider that a bug, but I'd like to gather some feedback first, if anyone else has run into this problem.
I've repeated this several times in completely distinct environments with the same results:
Simply add a host with a working Docker-CE as an oVirt host to an existing DC/cluster and then check whether you can still ping anyone on that net, including the Docker host itself, from a busybox container afterwards (you should run that ping just before you actually add it, too).
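Concretely, the check from the Docker host is just something like this (the address is an example; any address on the host's local subnet will do):
docker run --rm busybox ping -c 3 192.168.1.1   # works before the host joins oVirt, times out afterwards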
No, I didn't try this with podman yet, because that's a separate challenge with CUDA: I would love to know if that is part of QA for oVirt already.
4 years, 3 months
Non-storage nodes erroneously included in quorum calculations for HCI?
by thomas@hoberg.net
For my home-lab I operate a 3 node HCI cluster on 100% passive Atoms, mostly to run light infrastructure services such as LDAP and NextCloud.
I then add workstations or even laptops as pure compute hosts to the cluster for bigger but temporary things, that might actually run a different OS most of the time or just be shut off. From oVirt's point of view, these are just first put into maintenance and then shut down until needed again. No fencing or power management, all manual.
All nodes, even the HCI ones, run CentOS7 with more of a workstation configuration, so updates pile up pretty quickly.
After I recently upgraded one of these extra compute nodes, I found my three node HCI cluster not just faltering, but indeed very hard to reactivate at all.
The faltering is a distinct issue: I have the impression that reboots of oVirt nodes cause broadcast storms on my rather simplistic 10Gbit L2 switch, which a normal CentOS instance (or any other OS) doesn't, but that's for another post.
Now what struck me was that the gluster daemons on the three HCI nodes kept complaining about a lack of quorum long after the network was all back to normal, even though all three of them were there, saw each other perfectly on "gluster show status all", ready and without any healing issues pending at all.
Glusterd would complain on all three nodes that there was no quorum for the bricks and stop them.
That went away as soon as I started one additional compute node, a node that was a gluster peer (because an oVirt host added to an HCI cluster always gets put into the Gluster, even if it's not contributing storage) but had no bricks. Immediately the gluster daemons on the three nodes with contributing bricks would report quorum regained and launch the volumes (and thus all the rest of oVirt), even though in terms of *storage bricks* nothing had changed.
I am afraid that downing the extra compute-only oVirt node will bring down the HCI: clearly not the type of redundancy it's designed to deliver.
Evidently such compute-only hosts (and Gluster peers) get included in the quorum deliberations even though they hold not a single brick, neither storage nor arbitration.
To me that seems like a bug, if that is indeed what happens: this is where I need your advice and suggestions.
AFAIK HCI is a late addition to oVirt/RHEV, as storage and compute were originally designed to be completely distinct. In fact there are still remnants of documentation which seem to prohibit using a node for both compute and storage... which is what HCI is all about.
And I have seen compute nodes with "matching" storage (parts of a distinct HCI setup, that was taken down but still had all the storage and Gluster elements operable) being happily absorbed into an HCI cluster with all Gluster storage appearing in the GUI etc., without any manual creation or inclusion of bricks: fully automatic (and undocumented)!
In that case it makes sense to widen the scope of quorum calculations when the additional nodes are hyperconverged elements with contributing bricks. It also seems the only way to turn a 3-node HCI into a 6- or 9-node one.
But if you really just want to add compute nodes without bricks, those shouldn't get "quorum votes", as they have no storage to play a role in the redundancy.
I can easily imagine the missing "if then else" in the code here, but I was actually very surprised to see those failure and success messages coming from glusterd itself, which to my understanding is pretty unrelated to oVirt on top. Not from the management engine (wasn't running anyway), not from VDSM.
Re-creating the scenario is very scary, even though I have gone through this three times already just trying to bring my HCI back up. And there are such verbose logs all over the place that I'd like some advice on which ones I should post.
But simply speaking: Gluster peers should get no quorum voting rights on volumes unless they contribute bricks. That rule seems broken.
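For anyone who wants to check this on their own setup, the peer list and the server-quorum knobs can be inspected with standard Gluster commands (the volume name is just an example from a typical HCI layout):
gluster pool list                                    # every peer in the trusted pool shows up here, bricks or not
gluster volume get engine cluster.server-quorum-type
gluster volume get all cluster.server-quorum-ratio   # global ratio used for server-side quorum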
Those in the know, please let me know if am on a goose chase or if there is a real issue here that deserves a bug report.
4 years, 3 months
Re: Ovirt SYSTEM user does not allow deletion of VM networks
by Konstantinos B
I've deleted them from the cluster as well, but they re-appear.
I've removed ovirt-engine through engine-cleanup and installed it again, and they reappear.
I removed ovirt-engine, deleted all "ovirt*" files, and am currently trying to re-install.
I've checked the host's ovs-vsctl bridges and only the desired ones are shown.
So I believe it's an issue on the engine itself.
4 years, 4 months
Shutdown procedure for single host HCI Gluster
by Gianluca Cecchi
Hello,
I'm testing the single node HCI with the ovirt-node-ng 4.3.9 ISO.
Very nice and many improvements over the last time I tried it. Good!
I have a doubt related to shutdown procedure of the server.
Here below my steps:
- Shutdown all VMs (except engine)
- Put the data and vmstore domains into maintenance
- Enable Global HA Maintenance
- Shutdown engine
- Shutdown hypervisor
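For reference, the global maintenance and engine shutdown steps correspond to these commands on the host (the storage domain maintenance is done from the web UI):
hosted-engine --set-maintenance --mode=global
hosted-engine --vm-shutdown
shutdown -h now   # the final step on the hypervisor itself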
It seems that the last step doesn't end and I had to brutally power off the
hypervisor.
Here is a screenshot of the endlessly failing unmount of
/gluster_bricks/engine:
https://drive.google.com/file/d/1ee0HG21XmYVA0t7LYo5hcFx1iLxZdZ-E/view?us...
What would be the right steps to take before the final shutdown of the hypervisor?
Thanks,
Gianluca
4 years, 4 months
oVirt install questions
by David White
I'm reading through all of the documentation at https://ovirt.org/documentation/, and am a bit overwhelmed with all of the different options for installing oVirt.
My particular use case is that I'm looking for a way to manage VMs on multiple physical servers from 1 interface, and be able to deploy new VMs (or delete VMs) as necessary. Ideally, it would be great if I could move a VM from 1 host to a different host as well, particularly in the event that 1 host becomes degraded (bad HDD, bad processor, etc...)
I'm trying to figure out what the difference is between an oVirt Node and the oVirt Engine, and how the engine differs from the Manager.
I get the feeling that `Engine` = `Manager`. Same thing. I further think I understand the Engine to be essentially synonymous with a vCenter VM for ESXi hosts. Is this correct?
If so, then what's the difference between the `self-hosted` vs the `stand-alone` engines?
oVirt Engine requirements look to be a minimum of 4 GB RAM and 2 CPUs.
oVirt Nodes, on the other hand, require only 2 GB RAM.
Is this a requirement just for the physical host, or is that how much RAM each oVirt Node process requires? In other words, if I have a physical host with 12 GB of physical RAM, will I only be able to allocate 10 GB of that to guest VMs? How much of that should I dedicate to the oVirt Node processes?
Can you install the oVirt Engine as a VM onto an existing oVirt Node? And then connect that same node to the Engine, once the Engine is installed?
Reading through the documentation, it also sounds like oVirt Engine and oVirt Node require different versions of RHEL or CentOS.
I read that the Engine for oVirt 4.4.0 requires RHEL (or CentOS) 8.2, whereas each Node requires 7.x (although I'll plan to just use the oVirt Node ISO).
I'm also wondering about storage.
I don't really like the idea of using local storage, but a single NFS server would also be a single point of failure, and Gluster would be too expensive to deploy, so at this point, I'm leaning towards using local storage.
Any advice or clarity would be greatly appreciated.
Thanks,
David
Sent with ProtonMail Secure Email.
4 years, 4 months
Lots of problems with deploying the hosted-engine (oVirt 4.4 | CentOS 8.2.2004)
by jonas
Hi!
I have banged my head against deploying the ovirt 4.4 self-hosted engine
on Centos 8.2 for last couple of days.
First I was astonished that resources.ovirt.org has no IPv6
connectivity, which made my initial plan for a mostly IPv6-only
deployment impossible.
CentOS was installed from scratch using the ks.cgf Kickstart file below,
which also adds the ovirt 4.4 repo and installs cockpit-ovirt-dashboard
& ovirt-engine-appliance.
When deploying the hosted-engine from cockpit while logged in as a
non-root (although privileged) user, the "(3) Prepare VM" step instantly
fails with a nondescript error message and without generating any logs.
By using the browser dev tools I determined that this was because the
Ansible vars file could not be created, as the non-root user did not have
write permissions in '/var/lib/ovirt-hosted-engine-setup/cockpit/'.
Shouldn't Cockpit be capable of using sudo when appropriate, or at least
give a more descriptive error message?
After login into cockpit as root, or when using the command line
ovirt-hosted-engine-setup tool, the deployment fails with "Failed to
download metadata for repo 'AppStream'".
This seems to be because a) the dnsmasq running on the host does not
forward DNS queries, even though the host itself can resolve DNS queries
just fine, and b) there also does not seem to be any functioning routing
set up to reach anything outside the host.
Regarding a) it is strange that dnsmasq is running with a config file
'/var/lib/libvirt/dnsmasq/default.conf' containing the 'no-resolv'
option. Could the operation of systemd-resolved be interfering with
dnsmasq (see ss -tulpen output)? I tried to manually stop
systemd-resolved, but got the same behaviour as before.
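For completeness, the two symptoms can be narrowed down with checks along these lines (the dnsmasq address is the one from the ss output further below; dig comes from bind-utils):
dig +short resources.ovirt.org @127.0.0.53              # the systemd-resolved stub the host itself uses
dig +short resources.ovirt.org @fd00:1234:5678:900::1   # the dnsmasq instance listening on virbr0
ip route; ip -6 route                                    # what routing exists towards/from the virbr0 network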
I hope someone can give me a hint on how to get past this problem, as so
far my oVirt experience has been a little bit sub-par. :D
Also when running ovirt-hosted-engine-cleanup, the extracted engine VMs
in /var/tmp/localvm* are not removed, leading to a "disk-memory-leak"
with subsequent runs.
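Removing the leftovers by hand between runs seems to be the only option for now (the path is the one mentioned above):
rm -rf /var/tmp/localvm*   # extracted engine appliance images left behind by previous deploy attempts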
Best regards
Jonas
--- ss -tulpen output post deploy-run ---
[root@nxtvirt ~]# ss -tulpen | grep ':53 '
udp UNCONN 0 0 127.0.0.53%lo:53
0.0.0.0:* users:(("systemd-resolve",pid=1379,fd=18)) uid:193
ino:32910 sk:6 <->
udp UNCONN 0 0 [fd00:1234:5678:900::1]:53
[::]:* users:(("dnsmasq",pid=13525,fd=15)) uid:979 ino:113580
sk:d v6only:1 <->
udp UNCONN 0 0 [fe80::5054:ff:fe94:f314]%virbr0:53
[::]:* users:(("dnsmasq",pid=13525,fd=12)) uid:979 ino:113575
sk:e v6only:1 <->
tcp LISTEN 0 32 [fd00:1234:5678:900::1]:53
[::]:* users:(("dnsmasq",pid=13525,fd=16)) uid:979 ino:113581
sk:20 v6only:1 <->
tcp LISTEN 0 32 [fe80::5054:ff:fe94:f314]%virbr0:53
[::]:* users:(("dnsmasq",pid=13525,fd=13)) uid:979 ino:113576
sk:21 v6only:1 <->
--- running dnsmasq processes on host ('nxtvirt') post deploy-run ---
dnsmasq 13525 0.0 0.0 71888 2344 ? S 12:31 0:00
/usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
--leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
root 13526 0.0 0.0 71860 436 ? S 12:31 0:00
/usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf
--leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
--- var/lib/libvirt/dnsmasq/default.conf ---
##WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO
BE
##OVERWRITTEN AND LOST. Changes to this configuration should be made
using:
## virsh net-edit default
## or other application using the libvirt API.
##
## dnsmasq conf file created by libvirt
strict-order
pid-file=/run/libvirt/network/default.pid
except-interface=lo
bind-dynamic
interface=virbr0
dhcp-option=3
no-resolv
ra-param=*,0,0
dhcp-range=fd00:1234:5678:900::10,fd00:1234:5678:900::ff,64
dhcp-lease-max=240
dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
enable-ra
--- cockpit wizard overview before the 'Prepare VM' step ---
VM
Engine FQDN:engine.*REDACTED*
MAC Address:00:16:3e:20:13:b3
Network Configuration:Static
VM IP Address:*REDACTED*:1099:babe::3/64
Gateway Address:*REDACTED*:1099::1
DNS Servers:*REDACTED*:1052::11
Root User SSH Access:yes
Number of Virtual CPUs:4
Memory Size (MiB):4096
Root User SSH Public Key:(None)
Add Lines to /etc/hosts:yes
Bridge Name:ovirtmgmt
Apply OpenSCAP profile:no
Engine
SMTP Server Name:localhost
SMTP Server Port Number:25
Sender E-Mail Address:root@localhost
Recipient E-Mail Addresses:root@localhost
--- ks.cgf ---
#version=RHEL8
ignoredisk --only-use=vda
autopart --type=lvm
# Partition clearing information
clearpart --drives=vda --all --initlabel
# Use graphical install
#graphical
text
# Use CDROM installation media
cdrom
# Keyboard layouts
keyboard --vckeymap=de --xlayouts='de','us'
# System language
lang en_US.UTF-8
# Network information
network --bootproto=static --device=enp1s0 --ip=192.168.199.250
--netmask=255.255.255.0 --gateway=192.168.199.10
--ipv6=*REDACTED*:1090:babe::250/64 --ipv6gateway=*REDACTED*:1090::1
--hostname=nxtvirt.*REDACTED* --nameserver=*REDACTED*:1052::11
--activate
network --hostname=nxtvirt.*REDACTED*
# Root password
rootpw --iscrypted $6$*REDACTED*
firewall --enabled --service=cockpit --service=ssh
# Run the Setup Agent on first boot
firstboot --enable
# Do not configure the X Window System
skipx
# System services
services --enabled="chronyd"
# System timezone
timezone Etc/UTC --isUtc --ntpservers=ntp.*REDACTED*,ntp2.*REDACTED*
user --name=nonrootuser --groups=wheel --password=$6$*REDACTED*
--iscrypted
# KVM Users/Groups
group --name=kvm --gid=36
user --name=vdsm --uid=36 --gid=36
%packages
@^server-product-environment
#@graphical-admin-tools
@headless-management
kexec-tools
cockpit
%end
%addon com_redhat_kdump --enable --reserve-mb='auto'
%end
%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges
--notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges
--emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges
--notempty
%end
%post --erroronfail --log=/root/ks-post.log
#!/bin/sh
dnf update -y
# NFS storage
mkdir -p /opt/ovirt/nfs-storage
chown -R 36:36 /opt/ovirt/nfs-storage
chmod 0755 /opt/ovirt/nfs-storage
echo "/opt/ovirt/nfs-storage localhost" > /etc/exports
echo "/opt/ovirt/nfs-storage engine.*REDACTED*" >> /etc/exports
dnf install -y nfs-utils
systemctl enable nfs-server.service
# Install ovirt packages
dnf install -y
https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
dnf install -y cockpit-ovirt-dashboard ovirt-engine-appliance
# Enable cockpit
systemctl enable cockpit.socket
%end
#reboot --eject --kexec
reboot --eject
--- Host (nxtvirt) ip -a post deploy-run ---
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel
state UP group default qlen 1000
link/ether 52:54:00:ad:79:1b brd ff:ff:ff:ff:ff:ff
inet 192.168.199.250/24 brd 192.168.199.255 scope global
noprefixroute enp1s0
valid_lft forever preferred_lft forever
inet6 *REDACTED*:1099:babe::250/64 scope global noprefixroute
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fead:791b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
state UP group default qlen 1000
link/ether 52:54:00:94:f3:14 brd ff:ff:ff:ff:ff:ff
inet6 fd00:1234:5678:900::1/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fe94:f314/64 scope link
valid_lft forever preferred_lft forever
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master
virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:94:f3:14 brd ff:ff:ff:ff:ff:ff
7: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel
master virbr0 state UNKNOWN group default qlen 1000
link/ether fe:16:3e:68:d3:8a brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc16:3eff:fe68:d38a/64 scope link
valid_lft forever preferred_lft forever
--- iptables-save post deploy-run ---
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*filter
:INPUT ACCEPT [4007:8578553]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3920:7633249]
:LIBVIRT_INP - [0:0]
:LIBVIRT_OUT - [0:0]
:LIBVIRT_FWO - [0:0]
:LIBVIRT_FWI - [0:0]
:LIBVIRT_FWX - [0:0]
-A INPUT -j LIBVIRT_INP
-A FORWARD -j LIBVIRT_FWX
-A FORWARD -j LIBVIRT_FWI
-A FORWARD -j LIBVIRT_FWO
-A OUTPUT -j LIBVIRT_OUT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 68 -j ACCEPT
-A LIBVIRT_FWO -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWI -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWX -i virbr0 -o virbr0 -j ACCEPT
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*security
:INPUT ACCEPT [3959:8576054]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3920:7633249]
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*raw
:PREROUTING ACCEPT [4299:8608260]
:OUTPUT ACCEPT [3920:7633249]
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*mangle
:PREROUTING ACCEPT [4299:8608260]
:INPUT ACCEPT [4007:8578553]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [3920:7633249]
:POSTROUTING ACCEPT [3923:7633408]
:LIBVIRT_PRT - [0:0]
-A POSTROUTING -j LIBVIRT_PRT
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
# Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
*nat
:PREROUTING ACCEPT [337:32047]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [159:9351]
:OUTPUT ACCEPT [159:9351]
:LIBVIRT_PRT - [0:0]
-A POSTROUTING -j LIBVIRT_PRT
COMMIT
# Completed on Sun Jun 28 13:20:53 2020
4 years, 4 months