Hi Jonas,
Can you please share the full log file located in
/var/log/ovirt-hosted-engine-setup ?
My guess is that you hit [1]; [2] is the fix for that (not merged yet).
You can install the RPM with the fix from the Jenkins CI [3] and then run
the following command:
hosted-engine --deploy --ansible-extra-vars=he_offline_deployment=true
Regarding the "disk-memory-leak", can you please open a bug for that?
Regards,
Asaf
[1]
Hi!
A short addendum:
I have now also tried to perform the installation using the oVirt Node
distribution as a basis, but that also ended with the same problem. So
it does not seem to be an issue with the underlying CentOS installation,
but rather with my general setup or parameters.
Regards
Jonas
On 2020-06-28 16:26, jonas wrote:
> Hi!
>
> I have banged my head against deploying the oVirt 4.4 self-hosted
> engine on CentOS 8.2 for the last couple of days.
>
> First I was astonished that resources.ovirt.org has no IPv6
> connectivity, which made my initial plan for a mostly IPv6-only
> deployment impossible.
>
> CentOS was installed from scratch using the ks.cfg Kickstart file
> below, which also adds the oVirt 4.4 repo and installs
> cockpit-ovirt-dashboard & ovirt-engine-appliance.
>
> When deploying the hosted-engine from cockpit while logged in as a
> non-root (although privileged) user, the "(3) Prepare VM" step
> instantly fails with a nondescript error message and without
> generating any logs. Using the browser dev tools, I determined that
> this was because the ansible vars file could not be created, as
> the non-root user did not have write permissions in
> '/var/lib/ovirt-hosted-engine-setup/cockpit/' . Shouldn't cockpit be
> capable of using sudo when appropriate, or at least give a more
> descriptive error message?
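A quick way to confirm that diagnosis is to check whether the session user can
write the directory cockpit uses for the ansible vars file (path taken from the
dev-tools finding above; the helper name is my own, just a sketch):

```shell
#!/bin/sh
# can_write: report whether the given directory is writable by the current user.
# Helper name is hypothetical; the path below is the one from the dev-tools output.
can_write() {
  [ -w "$1" ]
}

if can_write /var/lib/ovirt-hosted-engine-setup/cockpit/; then
  echo "vars file can be created here"
else
  echo "not writable: log into cockpit as root (or fix the directory ownership)"
fi
```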
>
> After logging into cockpit as root, or when using the command-line
> ovirt-hosted-engine-setup tool, the deployment fails with "Failed to
> download metadata for repo 'AppStream'".
> This seems to be because a) the dnsmasq instance running on the host does
> not forward DNS queries, even though the host itself can resolve DNS
> queries just fine, and b) there also does not seem to be any
> functioning routing set up to reach anything outside the host.
> Regarding a) it is strange that dnsmasq is running with a config file
> '/var/lib/libvirt/dnsmasq/default.conf' containing the 'no-resolv'
> option. Could the operation of systemd-resolved be interfering with
> dnsmasq (see ss -tulpen output)? I tried to manually stop
> systemd-resolved, but got the same behaviour as before.
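For what it's worth, `no-resolv` tells dnsmasq to ignore /etc/resolv.conf
entirely, so with no `server=` lines in the generated config it has no upstream
to forward to. A fragment like the following (the upstream address is a made-up
example) would restore forwarding; note that libvirt regenerates this file, so a
permanent change would have to go through `virsh net-edit default`:

```
# Hypothetical dnsmasq fragment: an explicit upstream, since no-resolv
# stops dnsmasq from reading the host's /etc/resolv.conf
server=192.0.2.53
```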
>
> I hope someone can give me a hint as to how to get past this problem,
> as so far my oVirt experience has been a little bit sub-par. :D
>
> Also when running ovirt-hosted-engine-cleanup, the extracted engine
> VMs in /var/tmp/localvm* are not removed, leading to a
> "disk-memory-leak" with subsequent runs.
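Until that is fixed in ovirt-hosted-engine-cleanup, the leftovers can be removed
by hand; a minimal sketch (function name is mine, the localvm* path pattern is
from the report above):

```shell
#!/bin/sh
# cleanup_localvm: remove leftover engine-appliance extractions that
# ovirt-hosted-engine-cleanup leaves behind. Function name is hypothetical;
# the /var/tmp/localvm* pattern is from the report.
cleanup_localvm() {
  base="${1:-/var/tmp}"
  # -f makes rm succeed even when no localvm* directories are present
  rm -rf "$base"/localvm*
}
```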
>
> Best regards
> Jonas
>
> --- ss -tulpen output post deploy-run ---
> [root@nxtvirt ~]# ss -tulpen | grep ':53 '
> udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=1379,fd=18)) uid:193 ino:32910 sk:6 <->
> udp UNCONN 0 0 [fd00:1234:5678:900::1]:53 [::]:* users:(("dnsmasq",pid=13525,fd=15)) uid:979 ino:113580 sk:d v6only:1 <->
> udp UNCONN 0 0 [fe80::5054:ff:fe94:f314]%virbr0:53 [::]:* users:(("dnsmasq",pid=13525,fd=12)) uid:979 ino:113575 sk:e v6only:1 <->
> tcp LISTEN 0 32 [fd00:1234:5678:900::1]:53 [::]:* users:(("dnsmasq",pid=13525,fd=16)) uid:979 ino:113581 sk:20 v6only:1 <->
> tcp LISTEN 0 32 [fe80::5054:ff:fe94:f314]%virbr0:53 [::]:* users:(("dnsmasq",pid=13525,fd=13)) uid:979 ino:113576 sk:21 v6only:1 <->
>
>
> --- running dnsmasq processes on host ('nxtvirt') post deploy-run ---
>
> dnsmasq 13525 0.0 0.0 71888 2344 ? S 12:31 0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
> root 13526 0.0 0.0 71860 436 ? S 12:31 0:00 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
>
>
> --- var/lib/libvirt/dnsmasq/default.conf ---
>
> ##WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
> ##OVERWRITTEN AND LOST. Changes to this configuration should be made using:
> ## virsh net-edit default
> ## or other application using the libvirt API.
> ##
> ## dnsmasq conf file created by libvirt
> strict-order
> pid-file=/run/libvirt/network/default.pid
> except-interface=lo
> bind-dynamic
> interface=virbr0
> dhcp-option=3
> no-resolv
> ra-param=*,0,0
> dhcp-range=fd00:1234:5678:900::10,fd00:1234:5678:900::ff,64
> dhcp-lease-max=240
> dhcp-hostsfile=/var/lib/libvirt/dnsmasq/default.hostsfile
> addn-hosts=/var/lib/libvirt/dnsmasq/default.addnhosts
> enable-ra
>
> --- cockpit wizard overview before the 'Prepare VM' step ---
>
> VM
> Engine FQDN:engine.*REDACTED*
> MAC Address:00:16:3e:20:13:b3
> Network Configuration:Static
> VM IP Address:*REDACTED*:1099:babe::3/64
> Gateway Address:*REDACTED*:1099::1
> DNS Servers:*REDACTED*:1052::11
> Root User SSH Access:yes
> Number of Virtual CPUs:4
> Memory Size (MiB):4096
> Root User SSH Public Key:(None)
> Add Lines to /etc/hosts:yes
> Bridge Name:ovirtmgmt
> Apply OpenSCAP profile:no
> Engine
> SMTP Server Name:localhost
> SMTP Server Port Number:25
> Sender E-Mail Address:root@localhost
> Recipient E-Mail Addresses:root@localhost
>
> --- ks.cfg ---
>
> #version=RHEL8
> ignoredisk --only-use=vda
> autopart --type=lvm
> # Partition clearing information
> clearpart --drives=vda --all --initlabel
> # Use graphical install
> #graphical
> text
> # Use CDROM installation media
> cdrom
> # Keyboard layouts
> keyboard --vckeymap=de --xlayouts='de','us'
> # System language
> lang en_US.UTF-8
>
> # Network information
> network --bootproto=static --device=enp1s0 --ip=192.168.199.250 --netmask=255.255.255.0 --gateway=192.168.199.10 --ipv6=*REDACTED*:1090:babe::250/64 --ipv6gateway=*REDACTED*:1090::1 --hostname=nxtvirt.*REDACTED* --nameserver=*REDACTED*:1052::11 --activate
> network --hostname=nxtvirt.*REDACTED*
> # Root password
> rootpw --iscrypted $6$*REDACTED*
>
> firewall --enabled --service=cockpit --service=ssh
>
>
> # Run the Setup Agent on first boot
> firstboot --enable
> # Do not configure the X Window System
> skipx
> # System services
> services --enabled="chronyd"
> # System timezone
> timezone Etc/UTC --isUtc --ntpservers=ntp.*REDACTED*,ntp2.*REDACTED*
> user --name=nonrootuser --groups=wheel --password=$6$*REDACTED* --iscrypted
>
> # KVM Users/Groups
> group --name=kvm --gid=36
> user --name=vdsm --uid=36 --gid=36
>
> %packages
> @^server-product-environment
> #@graphical-admin-tools
> @headless-management
> kexec-tools
> cockpit
>
> %end
>
> %addon com_redhat_kdump --enable --reserve-mb='auto'
>
> %end
>
> %anaconda
> pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
> pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
> pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
> %end
>
> %post --erroronfail --log=/root/ks-post.log
> #!/bin/sh
>
> dnf update -y
>
> # NFS storage
> mkdir -p /opt/ovirt/nfs-storage
> chown -R 36:36 /opt/ovirt/nfs-storage
> chmod 0755 /opt/ovirt/nfs-storage
> echo "/opt/ovirt/nfs-storage localhost" > /etc/exports
> echo "/opt/ovirt/nfs-storage engine.*REDACTED*" >> /etc/exports
> dnf install -y nfs-utils
> systemctl enable nfs-server.service
>
> # Install ovirt packages
> dnf install -y https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
> dnf install -y cockpit-ovirt-dashboard ovirt-engine-appliance
>
> # Enable cockpit
> systemctl enable cockpit.socket
>
> %end
>
> #reboot --eject --kexec
> reboot --eject
>
>
> --- Host (nxtvirt) ip -a post deploy-run ---
>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
>     link/ether 52:54:00:ad:79:1b brd ff:ff:ff:ff:ff:ff
>     inet 192.168.199.250/24 brd 192.168.199.255 scope global noprefixroute enp1s0
>        valid_lft forever preferred_lft forever
>     inet6 *REDACTED*:1099:babe::250/64 scope global noprefixroute
>        valid_lft forever preferred_lft forever
>     inet6 fe80::5054:ff:fead:791b/64 scope link noprefixroute
>        valid_lft forever preferred_lft forever
> 5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether 52:54:00:94:f3:14 brd ff:ff:ff:ff:ff:ff
>     inet6 fd00:1234:5678:900::1/64 scope global
>        valid_lft forever preferred_lft forever
>     inet6 fe80::5054:ff:fe94:f314/64 scope link
>        valid_lft forever preferred_lft forever
> 6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel master virbr0 state DOWN group default qlen 1000
>     link/ether 52:54:00:94:f3:14 brd ff:ff:ff:ff:ff:ff
> 7: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UNKNOWN group default qlen 1000
>     link/ether fe:16:3e:68:d3:8a brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::fc16:3eff:fe68:d38a/64 scope link
>        valid_lft forever preferred_lft forever
>
>
> --- iptables-save post deploy-run ---
>
> # Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
> *filter
> :INPUT ACCEPT [4007:8578553]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [3920:7633249]
> :LIBVIRT_INP - [0:0]
> :LIBVIRT_OUT - [0:0]
> :LIBVIRT_FWO - [0:0]
> :LIBVIRT_FWI - [0:0]
> :LIBVIRT_FWX - [0:0]
> -A INPUT -j LIBVIRT_INP
> -A FORWARD -j LIBVIRT_FWX
> -A FORWARD -j LIBVIRT_FWI
> -A FORWARD -j LIBVIRT_FWO
> -A OUTPUT -j LIBVIRT_OUT
> -A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
> -A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
> -A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
> -A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
> -A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 53 -j ACCEPT
> -A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
> -A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
> -A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 68 -j ACCEPT
> -A LIBVIRT_FWO -i virbr0 -j REJECT --reject-with icmp-port-unreachable
> -A LIBVIRT_FWI -o virbr0 -j REJECT --reject-with icmp-port-unreachable
> -A LIBVIRT_FWX -i virbr0 -o virbr0 -j ACCEPT
> COMMIT
> # Completed on Sun Jun 28 13:20:53 2020
> # Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
> *security
> :INPUT ACCEPT [3959:8576054]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [3920:7633249]
> COMMIT
> # Completed on Sun Jun 28 13:20:53 2020
> # Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
> *raw
> :PREROUTING ACCEPT [4299:8608260]
> :OUTPUT ACCEPT [3920:7633249]
> COMMIT
> # Completed on Sun Jun 28 13:20:53 2020
> # Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
> *mangle
> :PREROUTING ACCEPT [4299:8608260]
> :INPUT ACCEPT [4007:8578553]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [3920:7633249]
> :POSTROUTING ACCEPT [3923:7633408]
> :LIBVIRT_PRT - [0:0]
> -A POSTROUTING -j LIBVIRT_PRT
> COMMIT
> # Completed on Sun Jun 28 13:20:53 2020
> # Generated by iptables-save v1.8.4 on Sun Jun 28 13:20:53 2020
> *nat
> :PREROUTING ACCEPT [337:32047]
> :INPUT ACCEPT [0:0]
> :POSTROUTING ACCEPT [159:9351]
> :OUTPUT ACCEPT [159:9351]
> :LIBVIRT_PRT - [0:0]
> -A POSTROUTING -j LIBVIRT_PRT
> COMMIT
> # Completed on Sun Jun 28 13:20:53 2020
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/G452P2BN7Z7...