Re: VMware Fence Agent
by Strahil Nikolov
I think it's easier to get VMware's CA certificate and import it on
all hosts + engine and trust it. By default you should put it at
/etc/pki/ca-trust/source/anchors/ and then run "update-ca-trust" so
that all certificates signed by the VMware vCenter CA are trusted.
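Something along these lines on each host and on the engine should do it (the certificate file name here is only a placeholder for whatever you export from vCenter):

# copy the exported vCenter CA certificate into the system trust store
cp vcenter-ca.pem /etc/pki/ca-trust/source/anchors/
# regenerate the consolidated trust bundles
update-ca-trust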
Best Regards,
Strahil Nikolov
On 21.01.2021 (Thu) at 06:44 +0000, Robert Tongue wrote:
> Greetings all, I am new to oVirt, and have a proof of concept setup
> with a 3-node oVirt cluster nested inside of VMware VCenter to learn
> it, so then I can efficiently migrate that back out to the physical
> nodes to replace VCenter. I have gotten all the way
> to a working cluster setup, with the exception of fencing. I used
> engine-config to pull in the vmware_soap fence agent, and enabled all
> the options, however there is one small thing I cannot figure out.
> The connection uses a self-signed certificate on the
> vCenter side, and I cannot figure out the proper combination of
> engine-config -s commands to get the script to be called with the
> "ssl-insecure" option, which does not contain a value. It just needs the
> option passed. Is there anyone out there in the ether
> that can help me out? I can provide any information you request.
> Thanks in advance.
>
> The fence agent script is called with the following syntax in my
> tests, and returned the proper status:
>
> [root@cluster2-vm ~]# /usr/sbin/fence_vmware_soap -o status -a vcenter.address --username="administrator(a)vsphere.local" --password="0bfusc@t3d" --ssl-insecure -n cluster1-vm
>
> Status: ON
>
> -phunyguy
Hosted Engine stuck in bios
by Joseph Gelinas
Hi,
I recently did some updates of oVirt from 4.4.1 or 4.4.3 to 4.4.4, also setting the default data center compatibility from 4.4 to 4.5 and making the default BIOS type Q35+UEFI; unfortunately, that was quite a few changes at once. Now, however, the hosted engine doesn't boot up anymore, and `hosted-engine --console` just shows the BIOS/firmware output below:
RHEL
RHEL-8.1.0 PC (Q35 + ICH9, 2009) 2.00 GHz
0.0.0 16384 MB RAM
Select Language <Standard English> This is the option
one adjusts to change
> Device Manager the language for the
> Boot Manager current system
> Boot Maintenance Manager
Continue
Reset
^v=Move Highlight <Enter>=Select Entry
When in this state, `hosted-engine --vm-status` says it is up but failed the liveliness check:
hosted-engine --vm-status | grep -i engine\ status
Engine status : {"vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"}
Engine status : {"vm": "up", "health": "bad", "detail": "Up", "reason": "failed liveliness check"}
Engine status : {"vm": "down", "health": "bad", "detail": "Down", "reason": "bad vm status"}
I assume I am running into https://access.redhat.com/solutions/5341561 (RHV: Hosted-Engine VM fails to start after changing the cluster to Q35/UEFI), however how to fix that isn't really described. I have tried starting the hosted engine paused (`hosted-engine --vm-start-paused`), editing the config (`virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf edit HostedEngine`) to use pc-i440fx instead and removing a bunch of pcie lines etc. until it accepts the config, and then resuming the hosted engine (`virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine`), but haven't come up with something that is able to start.
Anyone know how to resolve this? Am I even chasing the right path?
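Spelled out, the sequence I have been attempting looks roughly like this (the exact XML edits vary between attempts):

hosted-engine --vm-start-paused
# edit the domain: switch the machine type to pc-i440fx and drop pcie-root-port devices until the XML is accepted
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf edit HostedEngine
# then let the paused VM continue
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine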
/var/log/libvirt/qemu/HostedEngine.log
2021-01-20 15:31:56.500+0000: starting up libvirt version: 6.6.0, package: 7.1.el8 (CBS <cbs(a)centos.org>, 2020-12-10-14:05:40, ), qemu version: 5.1.0qemu-kvm-5.1.0-14.el8.1, kernel: 4.18.0-240.1.1.el8_3.x86_64, hostname: ovirt-3
LC_ALL=C \
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-25-HostedEngine/.config \
QEMU_AUDIO_DRV=spice \
/usr/libexec/qemu-kvm \
-name guest=HostedEngine,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-25-HostedEngine/master-key.aes \
-blockdev '{"driver":"file","filename":"/usr/share/OVMF/OVMF_CODE.secboot.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/81816cd3-5816-4185-b553-b5a636156fbd.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-rhel8.1.0,accel=kvm,usb=off,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-cpu Cascadelake-Server-noTSX,mpx=off \
-m size=16777216k,slots=16,maxmem=67108864k \
-overcommit mem-lock=off \
-smp 4,maxcpus=64,sockets=16,dies=1,cores=4,threads=1 \
-object iothread,id=iothread1 \
-numa node,nodeid=0,cpus=0-63,mem=16384 \
-uuid 81816cd3-5816-4185-b553-b5a636156fbd \
-smbios type=1,manufacturer=oVirt,product=RHEL,version=8-1.2011.el8,serial=4c4c4544-0051-3710-8032-c8c04f483633,uuid=81816cd3-5816-4185-b553-b5a636156fbd,family=oVirt \
-no-user-config \
-nodefaults \
-device sga \
-chardev socket,id=charmonitor,fd=47,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=2021-01-20T15:31:56,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-reboot \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot strict=on \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
-device pcie-root-port,port=0x16,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
-device pcie-root-port,port=0x17,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x7 \
-device pcie-root-port,port=0x18,chassis=9,id=pci.9,bus=pcie.0,multifunction=on,addr=0x3 \
-device pcie-root-port,port=0x19,chassis=10,id=pci.10,bus=pcie.0,addr=0x3.0x1 \
-device pcie-root-port,port=0x1a,chassis=11,id=pci.11,bus=pcie.0,addr=0x3.0x2 \
-device pcie-root-port,port=0x1b,chassis=12,id=pci.12,bus=pcie.0,addr=0x3.0x3 \
-device pcie-root-port,port=0x1c,chassis=13,id=pci.13,bus=pcie.0,addr=0x3.0x4 \
-device pcie-root-port,port=0x1d,chassis=14,id=pci.14,bus=pcie.0,addr=0x3.0x5 \
-device pcie-root-port,port=0x1e,chassis=15,id=pci.15,bus=pcie.0,addr=0x3.0x6 \
-device pcie-root-port,port=0x1f,chassis=16,id=pci.16,bus=pcie.0,addr=0x3.0x7 \
-device pcie-root-port,port=0x20,chassis=17,id=pci.17,bus=pcie.0,addr=0x4 \
-device pcie-pci-bridge,id=pci.18,bus=pci.1,addr=0x0 \
-device qemu-xhci,p2=8,p3=8,id=ua-5a52e9e5-0726-4393-b91c-1c76e76c9ac1,bus=pci.3,addr=0x0 \
-device virtio-scsi-pci,iothread=iothread1,id=ua-7127a708-0d2a-42f3-97e4-fc314703f96f,bus=pci.4,addr=0x0 \
-device virtio-serial-pci,id=ua-e654d96c-8a11-42a0-9c83-6dda18d6052e,max_ports=16,bus=pci.5,addr=0x0 \
-device ide-cd,bus=ide.2,id=ua-7653b07c-61d5-4982-95bd-69147c4a2e54,werror=report,rerror=report \
-blockdev '{"driver":"file","filename":"/run/vdsm/storage/634fd4e4-2cc0-42fb-a92f-63223f25a339/105c32f2-c14e-474c-920e-6507e47cc28d/a5047f29-82fe-41a0-b170-3c3592df46be","aio":"threads","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-1-storage"}' \
-device virtio-blk-pci,iothread=iothread1,bus=pci.6,addr=0x0,drive=libvirt-1-format,id=ua-105c32f2-c14e-474c-920e-6507e47cc28d,bootindex=1,write-cache=on,serial=105c32f2-c14e-474c-920e-6507e47cc28d,werror=stop,rerror=stop \
-netdev tap,fds=53:54:55:56,id=hostua-972a1ee9-25eb-4613-aac2-4996a7a28fff,vhost=on,vhostfds=57:58:59:60 \
-device virtio-net-pci,mq=on,vectors=10,host_mtu=1500,netdev=hostua-972a1ee9-25eb-4613-aac2-4996a7a28fff,id=ua-972a1ee9-25eb-4613-aac2-4996a7a28fff,mac=00:16:3e:6e:da:39,bus=pci.2,addr=0x0 \
-chardev socket,id=charserial0,fd=61,server,nowait \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=62,server,nowait \
-device virtserialport,bus=ua-e654d96c-8a11-42a0-9c83-6dda18d6052e.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-chardev spicevmc,id=charchannel1,name=vdagent \
-device virtserialport,bus=ua-e654d96c-8a11-42a0-9c83-6dda18d6052e.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 \
-chardev socket,id=charchannel2,fd=63,server,nowait \
-device virtserialport,bus=ua-e654d96c-8a11-42a0-9c83-6dda18d6052e.0,nr=3,chardev=charchannel2,id=channel2,name=org.ovirt.hosted-engine-setup.0 \
-device usb-tablet,id=input0,bus=ua-5a52e9e5-0726-4393-b91c-1c76e76c9ac1.0,port=1 \
-vnc 10.11.24.20:14,password \
-k en-us \
-spice port=5915,tls-port=5916,addr=10.11.24.20,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on \
-device qxl-vga,id=ua-c4d51e81-5bb4-4211-a00c-3d7ab431fef2,ram_size=67108864,vram_size=33554432,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
-device intel-hda,id=ua-68071572-175c-4af5-95f9-29f7e407e700,bus=pci.18,addr=0x1 \
-device hda-duplex,id=ua-68071572-175c-4af5-95f9-29f7e407e700-codec0,bus=ua-68071572-175c-4af5-95f9-29f7e407e700.0,cad=0 \
-device virtio-balloon-pci,id=ua-9d18ed17-c563-4f0a-b946-3d9d664a55e1,bus=pci.7,addr=0x0 \
-object rng-random,id=objua-0b27484d-b9b4-4372-b334-adcf8d3fc1eb,filename=/dev/urandom \
-device virtio-rng-pci,rng=objua-0b27484d-b9b4-4372-b334-adcf8d3fc1eb,id=ua-0b27484d-b9b4-4372-b334-adcf8d3fc1eb,bus=pci.8,addr=0x0 \
-device vmcoreinfo \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2021-01-20T15:31:56.621216Z qemu-kvm: -numa node,nodeid=0,cpus=0-63,mem=16384: warning: Parameter -numa node,mem is deprecated, use -numa node,memdev instead
/etc/libvirt/qemu/HostedEngine.xml
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
virsh edit HostedEngine
or other application using the libvirt API.
-->
<domain type='kvm'>
<name>HostedEngine</name>
<uuid>81816cd3-5816-4185-b553-b5a636156fbd</uuid>
<metadata xmlns:ns0="http://ovirt.org/vm/tune/1.0" xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ns0:qos/>
<ovirt-vm:vm xmlns:ovirt-vm="http://ovirt.org/vm/1.0">
<ovirt-vm:balloonTarget type="int">16777216</ovirt-vm:balloonTarget>
<ovirt-vm:clusterVersion>4.5</ovirt-vm:clusterVersion>
<ovirt-vm:destroy_on_reboot type="bool">False</ovirt-vm:destroy_on_reboot>
<ovirt-vm:launchPaused>false</ovirt-vm:launchPaused>
<ovirt-vm:memGuaranteedSize type="int">1024</ovirt-vm:memGuaranteedSize>
<ovirt-vm:minGuaranteedMemoryMb type="int">1024</ovirt-vm:minGuaranteedMemoryMb>
<ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior>
<ovirt-vm:startTime type="float">1611156714.4983754</ovirt-vm:startTime>
<ovirt-vm:device mac_address="00:16:3e:6e:da:39"/>
<ovirt-vm:device devtype="disk" name="vda">
<ovirt-vm:domainID>634fd4e4-2cc0-42fb-a92f-63223f25a339</ovirt-vm:domainID>
<ovirt-vm:imageID>105c32f2-c14e-474c-920e-6507e47cc28d</ovirt-vm:imageID>
<ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID>
<ovirt-vm:shared>exclusive</ovirt-vm:shared>
<ovirt-vm:volumeID>a5047f29-82fe-41a0-b170-3c3592df46be</ovirt-vm:volumeID>
</ovirt-vm:device>
</ovirt-vm:vm>
</metadata>
<maxMemory slots='16' unit='KiB'>67108864</maxMemory>
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>16777216</currentMemory>
<vcpu placement='static' current='4'>64</vcpu>
<iothreads>1</iothreads>
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>oVirt</entry>
<entry name='product'>RHEL</entry>
<entry name='version'>8-1.2011.el8</entry>
<entry name='serial'>4c4c4544-0051-3710-8032-c8c04f483633</entry>
<entry name='uuid'>81816cd3-5816-4185-b553-b5a636156fbd</entry>
<entry name='family'>oVirt</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-q35-rhel8.1.0'>hvm</type>
<loader readonly='yes' secure='no' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
<nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/81816cd3-5816-4185-b553-b5a636156fbd.fd</nvram>
<boot dev='hd'/>
<bios useserial='yes'/>
<smbios mode='sysinfo'/>
</os>
<features>
<acpi/>
<vmcoreinfo state='on'/>
</features>
<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>Cascadelake-Server-noTSX</model>
<topology sockets='16' dies='1' cores='4' threads='1'/>
<feature policy='disable' name='mpx'/>
<numa>
<cell id='0' cpus='0-63' memory='16777216' unit='KiB'/>
</numa>
</cpu>
<clock offset='variable' adjustment='0' basis='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>destroy</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw' error_policy='report'/>
<source startupPolicy='optional'>
<seclabel model='dac' relabel='no'/>
</source>
<target dev='sdc' bus='sata'/>
<readonly/>
<alias name='ua-7653b07c-61d5-4982-95bd-69147c4a2e54'/>
<address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>
<disk type='file' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='threads' iothread='1'/>
<source file='/run/vdsm/storage/634fd4e4-2cc0-42fb-a92f-63223f25a339/105c32f2-c14e-474c-920e-6507e47cc28d/a5047f29-82fe-41a0-b170-3c3592df46be'>
<seclabel model='dac' relabel='no'/>
</source>
<target dev='vda' bus='virtio'/>
<serial>105c32f2-c14e-474c-920e-6507e47cc28d</serial>
<alias name='ua-105c32f2-c14e-474c-920e-6507e47cc28d'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
</disk>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='2' port='0x11'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x12'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x13'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x14'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0x15'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0x16'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='8' port='0x17'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
</controller>
<controller type='pci' index='9' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='9' port='0x18'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='10' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='10' port='0x19'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
</controller>
<controller type='pci' index='11' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='11' port='0x1a'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
</controller>
<controller type='pci' index='12' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='12' port='0x1b'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
</controller>
<controller type='pci' index='13' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='13' port='0x1c'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
</controller>
<controller type='pci' index='14' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='14' port='0x1d'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
</controller>
<controller type='pci' index='15' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='15' port='0x1e'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
</controller>
<controller type='pci' index='16' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='16' port='0x1f'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
</controller>
<controller type='pci' index='17' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='17' port='0x20'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</controller>
<controller type='pci' index='18' model='pcie-to-pci-bridge'>
<model name='pcie-pci-bridge'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='sata' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
</controller>
<controller type='usb' index='0' model='qemu-xhci' ports='8'>
<alias name='ua-5a52e9e5-0726-4393-b91c-1c76e76c9ac1'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
</controller>
<controller type='scsi' index='0' model='virtio-scsi'>
<driver iothread='1'/>
<alias name='ua-7127a708-0d2a-42f3-97e4-fc314703f96f'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>
<controller type='virtio-serial' index='0' ports='16'>
<alias name='ua-e654d96c-8a11-42a0-9c83-6dda18d6052e'/>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
</controller>
<lease>
<lockspace>634fd4e4-2cc0-42fb-a92f-63223f25a339</lockspace>
<key>a5047f29-82fe-41a0-b170-3c3592df46be</key>
<target path='/rhev/data-center/mnt/glusterSD/ovirt-1:_engine/634fd4e4-2cc0-42fb-a92f-63223f25a339/images/105c32f2-c14e-474c-920e-6507e47cc28d/a5047f29-82fe-41a0-b170-3c3592df46be.lease'/>
</lease>
<interface type='bridge'>
<mac address='00:16:3e:6e:da:39'/>
<source bridge='ovirtmgmt'/>
<model type='virtio'/>
<driver name='vhost' queues='4'/>
<link state='up'/>
<mtu size='1500'/>
<alias name='ua-972a1ee9-25eb-4613-aac2-4996a7a28fff'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
</interface>
<serial type='unix'>
<source mode='bind' path='/var/run/ovirt-vmconsole-console/81816cd3-5816-4185-b553-b5a636156fbd.sock'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='unix'>
<source mode='bind' path='/var/run/ovirt-vmconsole-console/81816cd3-5816-4185-b553-b5a636156fbd.sock'/>
<target type='serial' port='0'/>
</console>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channels/81816cd3-5816-4185-b553-b5a636156fbd.org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<channel type='spicevmc'>
<target type='virtio' name='com.redhat.spice.0'/>
<address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>
<channel type='unix'>
<source mode='bind' path='/var/lib/libvirt/qemu/channels/81816cd3-5816-4185-b553-b5a636156fbd.org.ovirt.hosted-engine-setup.0'/>
<target type='virtio' name='org.ovirt.hosted-engine-setup.0'/>
<address type='virtio-serial' controller='0' bus='0' port='3'/>
</channel>
<input type='tablet' bus='usb'>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' keymap='en-us' passwd='*****' passwdValidTo='1970-01-01T00:00:01'>
<listen type='network' network='vdsm-ovirtmgmt'/>
</graphics>
<graphics type='spice' autoport='yes' passwd='*****' passwdValidTo='1970-01-01T00:00:01'>
<listen type='network' network='vdsm-ovirtmgmt'/>
<channel name='main' mode='secure'/>
<channel name='display' mode='secure'/>
<channel name='inputs' mode='secure'/>
<channel name='cursor' mode='secure'/>
<channel name='playback' mode='secure'/>
<channel name='record' mode='secure'/>
<channel name='smartcard' mode='secure'/>
<channel name='usbredir' mode='secure'/>
</graphics>
<sound model='ich6'>
<alias name='ua-68071572-175c-4af5-95f9-29f7e407e700'/>
<address type='pci' domain='0x0000' bus='0x12' slot='0x01' function='0x0'/>
</sound>
<video>
<model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1' primary='yes'/>
<alias name='ua-c4d51e81-5bb4-4211-a00c-3d7ab431fef2'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
<memballoon model='virtio'>
<stats period='5'/>
<alias name='ua-9d18ed17-c563-4f0a-b946-3d9d664a55e1'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<alias name='ua-0b27484d-b9b4-4372-b334-adcf8d3fc1eb'/>
<address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
</rng>
</devices>
</domain>
Managed Block Storage and more
by Shantur Rathore
Hi all,
I am planning my new oVirt cluster on Apple hosts. These hosts can only have one disk, which I plan to partition and use for a hyperconverged setup. As this is my first oVirt cluster, I need help in understanding a few bits.
1. Is a hyperconverged setup possible with Ceph using cinderlib?
2. Can this hyperconverged setup run on oVirt Node Next hosts, or only CentOS?
3. Can I install cinderlib on oVirt Node Next hosts?
4. Are there any pitfalls in such a setup?
Thanks for your help
Regards,
Shantur
[ANN] oVirt 4.4.4 is now generally available
by Sandro Bonazzola
oVirt 4.4.4 is now generally available
The oVirt project is excited to announce the general availability of oVirt 4.4.4, as of December 21st, 2020.
This release unleashes an altogether more powerful and flexible open source
virtualization solution that encompasses hundreds of individual changes and
a wide range of enhancements across the engine, storage, network, user
interface, and analytics, as compared to oVirt 4.3.
Important notes before you install / upgrade
Please note that oVirt 4.4 only supports clusters and data centers with
compatibility version 4.2 and above. If clusters or data centers are
running with an older compatibility version, you need to upgrade them to at
least 4.2 (4.3 is recommended).
Please note that in RHEL 8 / CentOS 8 several devices that worked on EL7
are no longer supported.
For example, the megaraid_sas driver is removed. If you use Enterprise
Linux 8 hosts you can try to provide the necessary drivers for the
deprecated hardware using the DUD method (See the users’ mailing list
thread on this at
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NDSVUZSESOXE...
)
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
What’s new in oVirt 4.4.4 Release?
This update is the fourth in a series of stabilization updates to the 4.4
series.
This release is available now on x86_64 architecture for:
- Red Hat Enterprise Linux 8.3
- CentOS Linux (or similar) 8.3
- CentOS Stream (tech preview)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
- Red Hat Enterprise Linux 8.3
- CentOS Linux (or similar) 8.3
- oVirt Node (based on CentOS Linux 8.3)
- CentOS Stream (tech preview)
oVirt Node and Appliance have been updated, including:
- oVirt 4.4.4: https://www.ovirt.org/release/4.4.4/
- Ansible 2.9.16: https://github.com/ansible/ansible/blob/stable-2.9/changelogs/CHANGELOG-v...
- CentOS Linux 8 (2011): https://lists.centos.org/pipermail/centos-announce/2020-December/048207.html
- Advanced Virtualization 8.3
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
Additional resources:
- Read more about the oVirt 4.4.4 release highlights: https://www.ovirt.org/release/4.4.4/
- Get more oVirt project updates on Twitter: https://twitter.com/ovirt
- Check out the latest project news on the oVirt blog: https://blogs.ovirt.org/
[1] https://www.ovirt.org/release/4.4.4/
[2] https://resources.ovirt.org/pub/ovirt-4.4/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
[ANN] oVirt 4.4.5 Second Release Candidate is now available for testing
by Sandro Bonazzola
oVirt 4.4.5 Second Release Candidate is now available for testing
The oVirt Project is pleased to announce the availability of oVirt 4.4.5
Second Release Candidate for testing, as of January 21st, 2021.
This update is the fifth in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1
Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps, if already performed while upgrading from 4.4.1 to 4.4.2 GA. These
are only required to be done once.
Due to Bug 1837864 <https://bugzilla.redhat.com/show_bug.cgi?id=1837864> ("Host enter emergency mode after upgrading to latest build"), if you have your root file system on a multipath device on your hosts, be aware that after upgrading from 4.4.1 to 4.4.5 your host may enter emergency mode.
In order to prevent this be sure to upgrade oVirt Engine first, then on
your hosts:
1. Remove the current lvm filter while still on 4.4.1, or in emergency mode (if rebooted).
2. Reboot.
3. Upgrade to 4.4.5 (redeploy in case of already being on 4.4.5).
4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in place.
5. Only if not using oVirt Node: run "dracut --force --add multipath" to rebuild the initramfs with the correct filter configuration.
6. Reboot.
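As a rough command-level sketch of steps 1, 4 and 5 on an Enterprise Linux host (assuming the default /etc/lvm/lvm.conf location; which editor you use and which filter line you remove is up to you):

# step 1, while still on 4.4.1 or from emergency mode: remove or comment out the "filter = [...]" line
vi /etc/lvm/lvm.conf
# step 4, after upgrading to 4.4.5: let vdsm generate and verify the new filter
vdsm-tool config-lvm-filter
# step 5, non-oVirt-Node hosts only: rebuild the initramfs with multipath support, then reboot
dracut --force --add multipath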
Documentation
- If you want to try oVirt as quickly as possible, follow the instructions on the Download <https://ovirt.org/download/> page.
- For complete installation, administration, and usage instructions, see the oVirt Documentation <https://ovirt.org/documentation/>.
- For upgrading from a previous version, see the oVirt Upgrade Guide <https://ovirt.org/documentation/upgrade_guide/>.
- For a general overview of oVirt, see About oVirt <https://ovirt.org/community/about.html>.
Important notes before you try it
Please note this is a pre-release build.
The oVirt Project makes no guarantees as to its suitability or usefulness.
This pre-release must not be used in production.
Installation instructions
For installation instructions and additional information please refer to:
https://ovirt.org/documentation/
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 8.3 or newer
* CentOS Linux (or similar) 8.3 or newer
* oVirt Node 4.4 based on CentOS Linux 8.3 (available for x86_64 only)
See the release notes [1] for installation instructions and a list of new
features and bugs fixed.
Notes:
- oVirt Appliance is already available for CentOS Linux 8
- oVirt Node NG is already available for CentOS Linux 8
- We found a few issues while testing on CentOS Stream so we are still
basing oVirt 4.4.5 Node and Appliance on CentOS Linux.
Additional Resources:
* Read more about the oVirt 4.4.5 release highlights:
http://www.ovirt.org/release/4.4.5/
* Get more oVirt project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.4.5/
[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>
*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.*
Cinderlib died after upgrading Cluster Compatibility version 4.4 -> 4.5
by Mike Andreev
Hi all,
after upgrading the Cluster Compatibility version from 4.4 to 4.5 on oVirt 4.4.7, cinderlib storage died:
tail -f /var/log/ovirt-engine/cinderlib/cinderlib.log
2021-01-20 12:35:28,781 - cinderlib-client - ERROR - Failure occurred when trying to run command 'delete_volume': Volume type with name __DEFAULT__ could not be found. [3d4a9aa9-c5f0-498c-b650-bc64d979f194]
2021-01-20 12:43:31,209 - cinderlib-client - ERROR - Failure occurred when trying to run command 'delete_volume': Volume type with name __DEFAULT__ could not be found. [927a1e82-aa21-4358-b213-120682d85e63]
2021-01-20 13:05:32,833 - cinderlib-client - ERROR - Failure occurred when trying to run command 'create_volume': Volume type with name __DEFAULT__ could not be found. [b645c321-5c64-4c54-972a-f81080bb6b0f]
2021-01-20 15:16:45,667 - cinderlib-client - ERROR - Failure occurred when trying to run command 'connect_volume': Volume type with name __DEFAULT__ could not be found. [19d98e01]
2021-01-20 15:16:48,232 - cinderlib-client - ERROR - Failure occurred when trying to run command 'connect_volume': Volume type with name __DEFAULT__ could not be found. [2088beea]
2021-01-20 15:16:50,320 - cinderlib-client - ERROR - Failure occurred when trying to run command 'disconnect_volume': Volume type with name __DEFAULT__ could not be found. [793923ab]
2021-01-20 15:37:28,423 - cinderlib-client - ERROR - Failure occurred when trying to run command 'connect_volume': Volume type with name __DEFAULT__ could not be found. [5c8f0d73]
2021-01-20 15:37:30,707 - cinderlib-client - ERROR - Failure occurred when trying to run command 'connect_volume': Volume type with name __DEFAULT__ could not be found. [35e96dcd]
2021-01-20 15:55:36,179 - cinderlib-client - ERROR - Failure occurred when trying to run command 'create_volume': Volume type with name __DEFAULT__ could not be found. [b5fbb358-36c7-4d3b-9431-16b3a699f300]
2021-01-20 15:57:33,733 - cinderlib-client - ERROR - Failure occurred when trying to run command 'create_volume': Volume type with name __DEFAULT__ could not be found. [ae879975-d36a-4b2b-9829-2713266f1c1f]
in engine.log:
2021-01-20 15:57:31,795+01 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (default task-2) [ae879975-d36a-4b2b-9829-2713266f1c1f] Running command: AddDiskCommand internal: false. Entities affected : ID: 32dd7b42-eeb7-4cf5-9bef-bd8f8dd9608e Type: StorageAction group CREATE_DISK with role type USER
2021-01-20 15:57:32,014+01 INFO [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand] (EE-ManagedExecutorService-commandCoordinator-Thread-1) [ae879975-d36a-4b2b-9829-2713266f1c1f] Running command: AddManagedBlockStorageDiskCommand internal: true.
2021-01-20 15:57:33,936+01 ERROR [org.ovirt.engine.core.common.utils.cinderlib.CinderlibExecutor] (EE-ManagedExecutorService-commandCoordinator-Thread-1) [ae879975-d36a-4b2b-9829-2713266f1c1f] cinderlib execution failed:
2021-01-20 15:57:34,012+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-2) [] EVENT_ID: USER_ADD_DISK_FINISHED_FAILURE(2,022), Add-Disk operation failed to complete.
2021-01-20 15:57:34,090+01 INFO [org.ovirt.engine.core.bll.storage.disk.AddDiskCommandCallback] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-47) [ae879975-d36a-4b2b-9829-2713266f1c1f] Command 'AddDisk' id: 'c7e43317-495e-4b97-96cd-02f02cb20ab2' child commands '[1de7bfa3-09c3-45a5-955a-580236f0296c]' executions were completed, status 'FAILED'
2021-01-20 15:57:35,127+01 ERROR [org.ovirt.engine.core.bll.storage.disk.AddDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [ae879975-d36a-4b2b-9829-2713266f1c1f] Ending command 'org.ovirt.engine.core.bll.storage.disk.AddDiskCommand' with failure.
2021-01-20 15:57:35,135+01 ERROR [org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand] (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48) [ae879975-d36a-4b2b-9829-2713266f1c1f] Ending command 'org.ovirt.engine.core.bll.storage.disk.managedblock.AddManagedBlockStorageDiskCommand' with failure.
I checked this with cinderlib 2.0, 2.1 and 3.0, installed both from rpm (Ussuri) and via pip install; the errors are the same.
On the backend we use Ceph 14 RBD pools.
Gluster Hyperconverged fails with single disk partitioned
by Shantur Rathore
Hi,
I am trying to set up a single-host self-hosted hyperconverged deployment with GlusterFS.
I have custom partitioning where I give 100G to oVirt and its partitions and the rest (~800G) to a physical partition (/dev/sda4).
When I try to create the Gluster deployment with the wizard, it fails:
TASK [gluster.infra/roles/backend_setup : Create volume groups]
****************
failed: [ovirt-macpro-16.lab.ced.bskyb.com] (item={'key':
'gluster_vg_sda4', 'value': [{'vgname': 'gluster_vg_sda4', 'pvname':
'/dev/sda4'}]}) => {"ansible_loop_var": "item", "changed": false, "err": "
Device /dev/sda4 excluded by a filter.\n", "item": {"key":
"gluster_vg_sda4", "value": [{"pvname": "/dev/sda4", "vgname":
"gluster_vg_sda4"}]}, "msg": "Creating physical volume '/dev/sda4' failed",
"rc": 5}
I checked, and the /etc/lvm/lvm.conf filter doesn't allow /dev/sda4; it only allows the PV of the onn VG. Once I manually add /dev/sda4 to the LVM filter, it works fine and the Gluster deployment completes.
Fdisk :
# fdisk -l /dev/sda
Disk /dev/sda: 931.9 GiB, 1000555581440 bytes, 1954210120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FE209000-85B5-489A-8A86-4CF0C91B2E7D
Device Start End Sectors Size Type
/dev/sda1 2048 1230847 1228800 600M EFI System
/dev/sda2 1230848 3327999 2097152 1G Linux filesystem
/dev/sda3 3328000 213043199 209715200 100G Linux LVM
/dev/sda4 213043200 1954209791 1741166592 830.3G Linux filesystem
LVS
# lvs
  LV                                 VG  Attr       LSize  Pool  Origin                           Data%  Meta%  Move Log Cpy%Sync Convert
  home                               onn Vwi-aotz-- 10.00g pool0                                   0.11
  ovirt-node-ng-4.4.4-0.20201221.0   onn Vwi---tz-k 10.00g pool0 root
  ovirt-node-ng-4.4.4-0.20201221.0+1 onn Vwi-aotz-- 10.00g pool0 ovirt-node-ng-4.4.4-0.20201221.0 25.26
  pool0                              onn twi-aotz-- 95.89g                                          2.95  14.39
  root                               onn Vri---tz-k 10.00g pool0
  swap                               onn -wi-ao----  4.00g
  tmp                                onn Vwi-aotz-- 10.00g pool0                                    0.12
  var                                onn Vwi-aotz-- 20.00g pool0                                    0.92
  var_crash                          onn Vwi-aotz-- 10.00g pool0                                    0.11
  var_log                            onn Vwi-aotz-- 10.00g pool0                                    0.13
  var_log_audit                      onn Vwi-aotz--  4.00g pool0                                    0.27
# grep filter /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-QrvErF-eaS9-PxbI-wCBV-3OxJ-V600-NG7raZ$|", "r|.*|"]
Am I doing something oVirt isn't expecting?
Is there any way to tell the Gluster deployment to add it to the LVM config?
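For reference, the manual change that lets the deployment proceed is roughly the following; the added accept rule is just whatever matches the data partition (/dev/sda4 here), placed before the catch-all reject:

# /etc/lvm/lvm.conf
filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-QrvErF-eaS9-PxbI-wCBV-3OxJ-V600-NG7raZ$|", "a|^/dev/sda4$|", "r|.*|"]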
Thanks,
Shantur
Change of IP's
by Román Larumbe S.
Hello,
I have a Dell VRTX with 2 M640 servers. We installed oVirt version 4.3.10.4-1.el7 and have 1 cluster, 2 hosts and 2 storage domains.
I need to change the entire network environment, that is, move from IPs 192.168.11.XX to 172.16.10.XX.
Can anybody help me?
Thank you.
Román Larumbe S.
Facebook: Serahp LS
VMware Fence Agent
by Robert Tongue
Greetings all, I am new to oVirt, and have a proof-of-concept setup with a 3-node oVirt cluster nested inside of VMware vCenter to learn it, so that I can then efficiently migrate it back out to the physical nodes to replace vCenter. I have gotten all the way to a working cluster setup, with the exception of fencing. I used engine-config to pull in the vmware_soap fence agent and enabled all the options; however, there is one small thing I cannot figure out. The connection uses a self-signed certificate on the vCenter side, and I cannot figure out the proper combination of engine-config -s commands to get the script called with the "ssl-insecure" option, which does not contain a value. It just needs the option passed. Is there anyone out there in the ether that can help me out? I can provide any information you request. Thanks in advance.
The fence agent script is called with the following syntax in my tests, and returned the proper status:
[root@cluster2-vm ~]# /usr/sbin/fence_vmware_soap -o status -a vcenter.address --username="administrator(a)vsphere.local" --password="0bfusc@t3d" --ssl-insecure -n cluster1-vm
Status: ON
-phunyguy