Re: [ANN] oVirt 4.3.7 Third Release Candidate is now available for testing
by Strahil
Hi Sahina,
I have a strange situation:
1. When I try to access the file via 'sudo -u vdsm dd if=disk of=test bs=4M', the command fails at approximately 60 MB with an I/O error.
2. If I run the same command as root, remove the output file, and then run it again as the vdsm user, no I/O error is reported this time.
My guess is that I need to check what's going on on the bricks themselves...
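For reference, a minimal sketch of a brick-level check (the volume name and brick paths are placeholders, to be read from the volume info):
gluster volume info <volume> | grep Brick
# then, on each brick host, checksum the same file under its brick path:
md5sum <brick-path>/<path-to-disk>
A read error or checksum mismatch on a single brick would point at that brick.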
Best Regards,
Strahil Nikolov
5 years
Newbie Thin Client question
by crege7@gmail.com
I am new to this whole technology and I can't find anything about thin clients. I was wondering whether I can even use oVirt with a thin client; if anyone knows how, help would be much appreciated. I am also wondering if there is a way to use a Raspberry Pi as a thin client.
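One common approach, as a sketch rather than tested guidance: the thin client only needs a SPICE/VNC viewer, so on a Raspberry Pi running a Debian-based OS something like this may work (assuming virt-viewer is packaged for your distro):
sudo apt-get install virt-viewer
remote-viewer console.vv   # console.vv is the console file downloaded from the oVirt portal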
5 years
Gluster & Hyper Converged setup
by rob.downer@orbitalsystems.co.uk
Hi,
Gluster will not set up and fails... can anyone see why?
/etc/hosts is set up for both the backend Gluster network and the front end, and LAN DNS is set up on the subnet for the front end.
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for JBOD] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:17
skipping: [gfs2.gluster.private] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [gfs1.gluster.private] => {"changed": false, "skip_reason": "Conditional result was False"}
skipping: [gfs3.gluster.private] => {"changed": false, "skip_reason": "Conditional result was False"}
TASK [gluster.infra/roles/backend_setup : Set PV data alignment for RAID] ******
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:24
ok: [gfs2.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, "changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, "changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"pv_dataalign": "3072K\n"}, "changed": false}
TASK [gluster.infra/roles/backend_setup : Set VG physical extent size for RAID] ***
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:34
ok: [gfs2.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, "changed": false}
ok: [gfs1.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, "changed": false}
ok: [gfs3.gluster.private] => {"ansible_facts": {"vg_pesize": "3072K\n"}, "changed": false}
TASK [gluster.infra/roles/backend_setup : Create volume groups] ****************
task path: /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vg_create.yml:46
failed: [gfs1.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}
failed: [gfs3.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}
failed: [gfs2.gluster.private] (item={u'vgname': u'gluster_vg_sdb', u'pvname': u'/dev/sdb'}) => {"ansible_loop_var": "item", "changed": false, "item": {"pvname": "/dev/sdb", "vgname": "gluster_vg_sdb"}, "msg": "Device /dev/sdb not found."}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
gfs1.gluster.private : ok=10 changed=0 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0
gfs2.gluster.private : ok=11 changed=1 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0
gfs3.gluster.private : ok=10 changed=0 unreachable=0 failed=1 skipped=16 rescued=0 ignored=0
5 years
Wrong CPU?
by Christian Reiss
Hey folks,
running an AMD EPYC CPU here:
processor : 0
vendor_id : AuthenticAMD
cpu family : 23
model : 49
model name : AMD EPYC 7282 16-Core Processor
However, libvirt is detecting this as EPYC-IBPB without the ssbd flags?
<cpu>
<arch>x86_64</arch>
<model>EPYC-IBPB</model>
<vendor>AMD</vendor>
<microcode version='137367580'/>
<counter name='tsc' frequency='2799999000'/>
<topology sockets='1' cores='16' threads='2'/>
<feature name='ht'/>
<feature name='osxsave'/>
<feature name='cmt'/>
<feature name='clwb'/>
<feature name='umip'/>
<feature name='xsaves'/>
<feature name='mbm_total'/>
<feature name='mbm_local'/>
<feature name='cmp_legacy'/>
<feature name='extapic'/>
<feature name='ibs'/>
<feature name='skinit'/>
<feature name='wdt'/>
<feature name='tce'/>
<feature name='topoext'/>
<feature name='perfctr_core'/>
<feature name='perfctr_nb'/>
<feature name='invtsc'/>
<feature name='wbnoinvd'/>
<pages unit='KiB' size='4'/>
<pages unit='KiB' size='2048'/>
<pages unit='KiB' size='1048576'/>
</cpu>
[root@node01 ~]# grep ssbd /var/cache/libvirt/qemu/capabilities/*.xml
<property name='ssbd' type='boolean' value='false'/>
<property name='virt-ssbd' type='boolean' value='false'/>
<property name='ssbd' type='boolean' value='false'/>
<property name='virt-ssbd' type='boolean' value='false'/>
But the flag is there:
[root@node01 ~]# grep ssbd /proc/cpuinfo | tail -n1
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc
extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16
sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy
svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs
skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb
cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall
fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb
sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total
cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save
tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
avic v_vmsave_vmload vgif umip overflow_recov succor smca
I tried adding "options kvm_amd avic=1" as well as "options kvm_amd
avic=0" to /etc/modprobe.d/kvm.conf (always with reboots), and adding
mitigations=off to the GRUB command line. I can't think of any other solution.
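(One more data point worth checking, a sketch assuming an EL7 kernel with the sysfs vulnerabilities interface, is the kernel's own view of SSBD:
cat /sys/devices/system/cpu/vulnerabilities/spec_store_bypass )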
I just can't get the oVirt engine running with the ssbd flag. It seems
the CPU can do this and oVirt can do this, but libvirt either does not
detect the CPU correctly or ignores the flag. Yet the hosted engine demands it.
I am at a loss. Any help is oh-so-greatly appreciated.
-Chris.
--
Christian Reiss - email(a)christian-reiss.de /"\ ASCII Ribbon
support(a)alpha-labs.net \ / Campaign
X against HTML
WEB alpha-labs.net / \ in eMails
GPG Retrieval https://gpg.christian-reiss.de
GPG ID ABCD43C5, 0x44E29126ABCD43C5
GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
"It's better to reign in hell than to serve in heaven.",
John Milton, Paradise lost.
5 years
[ANN] oVirt 4.3.7 Fourth Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.3.7 Fourth Release Candidate for testing, as of November 19th, 2019.
This update is a release candidate of the seventh in a series of
stabilization updates to the 4.3 series.
This is pre-release software. This pre-release should not be used in
production.
This release is available now on x86_64 architecture for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:
* Red Hat Enterprise Linux 7.7 or later (but <8)
* CentOS Linux (or similar) 7.7 or later (but <8)
* oVirt Node 4.3 (available for x86_64 only) has been built on the
CentOS 7.7 release
See the release notes [1] for known issues, new features and bugs fixed.
While testing this release candidate, please note that oVirt Node now
includes:
- ansible 2.9.1
- GlusterFS 6.6
Notes:
- oVirt Appliance is already available
- oVirt Node is already available
Additional Resources:
* Read more about the oVirt 4.3.7 release highlights:
http://www.ovirt.org/release/4.3.7/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.3.7/
[2] http://resources.ovirt.org/pub/ovirt-4.3-pre/iso/
--
Sandro Bonazzola
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo(a)redhat.com
<https://www.redhat.com/>*Red Hat respects your work-life balance.
Therefore there is no need to answer this email outside of your office hours.*
5 years
FOSDEM 2020 Virtualization & IaaS Devroom CfP
by Piotr Kliczewski
A friendly reminder that there are two weeks left before the submission deadline.
Room day update:
This year the Virt & IaaS devroom will be on the 2nd of February.
See you all at FOSDEM!
5 years
Gluster setup
by rob.downer@orbitalsystems.co.uk
I have set up a 3-node system.
Gluster has its own backend network, and I have tried entering the FQDNs of the hosts via SSH as follows...
gfs1.gluster.private 10.10.45.11
gfs2.gluster.private 10.10.45.12
gfs3.gluster.private 10.10.45.13
I entered at /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
gfs1.gluster.private 10.10.45.11
gfs2.gluster.private 10.10.45.12
gfs3.gluster.private 10.10.45.13
but on the CLI
host gfs1.gluster.private
returns
[root@ovirt1 etc]# host gfs1.gluster.private
Host gfs1.gluster.private not found: 3(NXDOMAIN)
[root@ovirt1 etc]#
I guess this is the wrong hosts file; nsswitch.conf lists files first for lookup...
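For reference, /etc/hosts expects the IP address first, then the hostname(s); a minimal sketch of that layout with the entries above:
10.10.45.11 gfs1.gluster.private
10.10.45.12 gfs2.gluster.private
10.10.45.13 gfs3.gluster.private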
5 years
Re: SSBD issues on live cluster
by Strahil
Copy the long XML and save it to a file.
Then edit it and remove the requirement for the mitigations.
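A minimal sketch of that edit, assuming the XML was saved as /root/he.xml (the line to drop is the virt-ssbd requirement):
sed -i 's|<feature name="virt-ssbd" policy="require"/>||' /root/he.xml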
Then set the following alias:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
Then:
virsh undefine HostedEngine
virsh define <file>
virsh start HostedEngine
Maybe it will work :)
Best Regards,
Strahil Nikolov
On Nov 15, 2019 15:30, Christian Reiss <email(a)christian-reiss.de> wrote:
>
> On 15/11/2019 13:30, thomas(a)hoberg.net wrote:
> > Since there is no guarantee that the oVirt node image and the hosted-engine image are aligned, I'd recommend disabling all mitigations during the host's boot (I only have a list of the Intel flags, sorry: not rich enough for EPYC) and seeing if that sails through. And if you have no mitigation risk issues, keep the base CPU definition as low as you can stand (your VMs' applications could miss out on some nice instruction extensions or other features if you go rock-bottom).
>
> Hey,
>
> Ugh, I am at a loss. I added
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> GRUB_CMDLINE_LINUX='crashkernel=auto
> rd.lvm.lv=onn/ovirt-node-ng-4.3.6-0.20190926.0+1 rd.lvm.lv=onn/swap
> mitigations=off rhgb quiet'
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
>
> to /etc/default/grub, created a new grub.cfg and rebooted.
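> (The regeneration step, as a sketch assuming a BIOS-booted EL7 host; UEFI
> systems use a different output path:
> grub2-mkconfig -o /boot/grub2/grub.cfg )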
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> [root@node01 ~]# cat /proc/cmdline
> BOOT_IMAGE=/ovirt-node-ng-4.3.6-0.20190926.0+1/vmlinuz-3.10.0-1062.1.1.el7.x86_64
> root=/dev/onn/ovirt-node-ng-4.3.6-0.20190926.0+1 ro crashkernel=auto
> rd.lvm.lv=onn/swap mitigations=off rhgb quiet
> rd.lvm.lv=onn/ovirt-node-ng-4.3.6-0.20190926.0+1
> img.bootid=ovirt-node-ng-4.3.6-0.20190926.0+1
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
>
> Even after clearing the cache and restarting libvirt the issue is still there:
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> [root@node01 ~]# cat
> /var/cache/libvirt/qemu/capabilities/3c76bc41d59c0c7314b1ae8e63f4f765d2cf16abaeea081b3ca1f5d8732f7bb1.xml
> | grep ssb
> <property name='ssbd' type='boolean' value='false'/>
> <property name='virt-ssbd' type='boolean' value='false'/>
> <property name='ssbd' type='boolean' value='false'/>
> <property name='virt-ssbd' type='boolean' value='false'/>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
>
> and the flags are still set (duh):
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> [root@node01 ~]# grep ssbd /proc/cpuinfo | tail -n1
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
> pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
> pdpe1gb rdtscp lm constant_tsc art rep_good nopl xtopology nonstop_tsc
> extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16
> sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy
> svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs
> skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb
> cat_l3 cdp_l3 hw_pstate sme retpoline_amd ssbd ibrs ibpb stibp vmmcall
> fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb
> sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total
> cqm_mbm_local clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save
> tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
> avic v_vmsave_vmload vgif umip overflow_recov succor smca
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
>
> Deploying the oVirt hosted engine still works up to the final point,
> when it stops with the usual error:
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> 2019-11-15 14:43:54,758+0100 INFO (jsonrpc/6) [api.virt] FINISH
> getStats return={'status': {'message': 'Done', 'code': 0}, 'statsList':
> [{'status': 'Down', 'exitMessage': 'the CPU is incompatible with host
> CPU: Host CPU does not provide required features: virt-ssbd',
> 'statusTime': '4294738860', 'vmId':
> 'd116b296-9ae7-4ff3-80b4-73dc228a7b64', 'exitReason': 1, 'exitCode':
> 1}]} from=::1,46514, vmId=d116b296-9ae7-4ff3-80b4-73dc228a7b64 (api:54)
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
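> (A direct check of what libvirt computes for this host might help here, a
> sketch: virsh domcapabilities | grep -i ssbd )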
>
>
> I can see that during the final stages (up to this point the engine VM
> is up and running) in vdsm.log there is a (super long) line:
>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
> 2019-11-15 13:36:10,248+0100 INFO (jsonrpc/4) [api.virt] FINISH create
> return={'status': {'message': 'Done', 'code': 0}, 'vmList': {'status':
> 'WaitForLaunch', 'maxMemSize': 65536, 'acpiEnable': 'true',
> 'emulatedMachine': 'pc-i440fx-rhel7.6.0', 'numOfIoThreads': '1', 'vmId':
> 'd116b296-9ae7-4ff3-80b4-73dc228a7b64', 'memGuaranteedSize': 1024,
> 'timeOffset': '0', 'smpThreadsPerCore': '1', 'cpuType': 'EPYC',
> 'guestDiskMapping': {}, 'arch': 'x86_64', 'smp': '4', 'guestNumaNodes':
> [{'nodeIndex': 0, 'cpus': '0,1,2,3', 'memory': '16384'}], u'xml':
> u'<?xml version=\'1.0\' encoding=\'UTF-8\'?>\n<domain
> xmlns:ovirt-tune="http://ovirt.org/vm/tune/1.0"
> xmlns:ovirt-vm="http://ovirt.org/vm/1.0"
> type="kvm"><name>HostedEngine</name><uuid>d116b296-9ae7-4ff3-80b4-73dc228a7b64</uuid><memory>16777216</memory><currentMemory>16777216</currentMemory><iothreads>1</iothreads><maxMemory
> slots="16">67108864</maxMemory><vcpu current="4">64</vcpu><sysinfo
> type="smbios"><system><entry name="manufacturer">oVirt</entry><entry
> name="product">OS-NAME:</entry><entry
> name="version">OS-VERSION:</entry><entry
> name="serial">HOST-SERIAL:</entry><entry
> name="uuid">d116b296-9ae7-4ff3-80b4-73dc228a7b64</entry></system></sysinfo><clock
> offset="variable" adjustment="0"><timer name="rtc"
> tickpolicy="catchup"/><timer name="pit" tickpolicy="delay"/><timer
> name="hpet" present="no"/></clock><features><acpi/></features><cpu
> match="exact"><model>EPYC</model><feature name="ibpb"
> policy="require"/><feature name="virt-ssbd" policy="require"/><topology
> cores="4" threads="1" sockets="16"/><numa><cell id="0" cpus="0,1,2,3"
> memory="16777216"/></numa></cpu><cputune/><devices><input type="mouse"
> bus="ps2"/><channel type="unix"><target type="virtio"
> name="ovirt-guest-agent.0"/><source mode="bind"
> path="/var/lib/libvirt/qemu/channels/d116b296-9ae7-4ff3-80b4-73dc228a7b64.ovirt-guest-agent.0"/></channel><channel
> type="unix"><target type="virtio" name="org.qemu.guest_agent.0"/><source
> mode="bind"
> path="/var/lib/libvirt/qemu/channels/d116b296-9ae7-4ff3-80b4-73dc228a7b64.org.qemu.guest_agent.0"/></channel><sound
> model="ich6"><alias
> name="ua-05ce597b-8e43-4360-81ac-2ca13cb4f9d5"/></sound><graphics
> type="vnc" port="-1" autoport="yes" passwd="*****"
> passwdValidTo="1970-01-01T00:00:01" keymap="en-us"><listen
> type="network" network="vdsm-ovirtmgmt"/></graphics><controller
> type="scsi" model="virtio-scsi" index="0"><driver iothread="1"/><alias
> name="ua-30edc108-3218-43dc-ad43-129ce392930e"/></controller><video><model
> type="qxl" vram="32768" heads="1" ram="65536" vgamem="16384"/><alias
> name="ua-31187f25-275b-490d-922b-15712b6fabb6"/></video><console
> type="unix"><source
> path="/var/run/ovirt-vmconsole-console/d116b296-9ae7-4ff3-80b4-73dc228a7b64.sock"
> mode="bind"/><target type="serial" port="0"/><alias
> name="ua-7a643a8e-6871-4d63-a38e-632f03566e63"/></console><graphics
> type="spice" port="-1" autoport="yes" passwd="*****"
> passwdValidTo="1970-01-01T00:00:01" tlsPort="-1"><channel name="main"
> mode="secure"/><channel name="inputs" mode="secure"/><channel
> name="cursor" mode="secure"/><channel name="playback"
> mode="secure"/><channel name="record" mode="secure"/><channel
> name="display" mode="secure"/><channel name="smartcard"
> mode="secure"/><channel name="usbredir" mode="secure"/><listen
> type="network" network="vdsm-ovirtmgmt"/></graphics><controller
> type="virtio-serial" index="0" ports="16"><alias
> name="ua-833fb61c-213a-4871-b99a-3863958ce070"/></controller><rng
> model="virtio"><backend model="random">/dev/urandom</backend><alias
> name="ua-91c1c22f-5d21-458b-b1a1-a700ea8b5e5c"/></rng><memballoon
> model="virtio"><stats period="5"/><alias
> name="ua-943f5866-0165-40bf-a4b6-658072a1d7f5"/></memballoon><controller
> type="usb" model="piix3-uhci" index="0"/><serial type="unix"><source
> path="/var/run/ovirt-vmconsole-console/d116b296-9ae7-4ff3-80b4-73dc228a7b64.sock"
> mode="bind"/><target port="0"/></serial><channel type="spicevmc"><target
> type="virtio" name="com.redhat.spice.0"/></channel><interface
> type="bridge"><model type="virtio"/><link state="up"/><source
> bridge="ovirtmgmt"/><driver queues="4" name="vhost"/><alias
> name="ua-03c1177f-98be-4bed-8dd0-1f1895a0a0c6"/><mac
> address="00:16:3e:3b:5d:da"/><mtu size="1500"/><filterref
> filter="vdsm-no-mac-spoofing"/><bandwidth/></interface><disk type="file"
> device="cdrom" snapshot="no"><driver name="qemu" type="raw"
> error_policy="report"/><source file=""
> startupPolicy="optional"><seclabel model="dac" type="none"
> relabel="no"/></source><target dev="hdc" bus="ide"/><readonly/><alias
> name="ua-acc9e0f3-ab5b-4637-876a-96242a52a470"/></disk><disk
> snapshot="no" type="file" device="disk"><target dev="vda"
> bus="virtio"/><source
> file="/rhev/data-center/00000000-0000-0000-0000-000000000000/b77c80b7-a2a5-4627-a48e-8b8a49583c5d/images/e070502c-780b-45a5-98d1-6f6db9a48967/e4066d0d-2a83-4802-8976-09f2a18baf23"><seclabel
> model="dac" type="none" relabel="no"/></source><driver name="qemu"
> iothread="1" io="threads" type="raw" error_policy="stop"
> cache="none"/><alias
> name="ua-e070502c-780b-45a5-98d1-6f6db9a48967"/><serial>e070502c-780b-45a5-98d1-6f6db9a48967</serial></disk><lease><key>e4066d0d-2a83-4802-8976-09f2a18baf23</key><lockspace>b77c80b7-a2a5-4627-a48e-8b8a49583c5d</lockspace><target
> offset="LEASE-OFFSET:e4066d0d-2a83-4802-8976-09f2a18baf23:b77c80b7-a2a5-4627-a48e-8b8a49583c5d"
> path="LEASE-PATH:e4066d0d-2a83-4802-8976-09f2a18baf23:b77c80b7-a2a5-4627-a48e-8b8a49583c5d"/></lease></devices><pm><suspend-to-disk
> enabled="no"/><suspend-to-mem enabled="no"/></pm><os><type arch="x86_64"
> machine="pc-i440fx-rhel7.6.0">hvm</type><smbios mode="sysinfo"/><bios
> useserial="yes"/></os><metadata><ovirt-tune:qos/><ovirt-vm:vm><ovirt-vm:minGuaranteedMemoryMb
> type="int">1024</ovirt-vm:minGuaranteedMemoryMb><ovirt-vm:clusterVersion>4.3</ovirt-vm:clusterVersion><ovirt-vm:custom/><ovirt-vm:device
> mac_address="00:16:3e:3b:5d:da"><ovirt-vm:custom/></ovirt-vm:device><ovirt-vm:device
> devtype="disk"
> name="vda"><ovirt-vm:poolID>00000000-0000-0000-0000-000000000000</ovirt-vm:poolID><ovirt-vm:volumeID>e4066d0d-2a83-4802-8976-09f2a18baf23</ovirt-vm:volumeID><ovirt-vm:shared>exclusive</ovirt-vm:shared><ovirt-vm:imageID>e070502c-780b-45a5-98d1-6f6db9a48967</ovirt-vm:imageID><ovirt-vm:domainID>b77c80b7-a2a5-4627-a48e-8b8a49583c5d</ovirt-vm:domainID></ovirt-vm:device><ovirt-vm:launchPaused>false</ovirt-vm:launchPaused><ovirt-vm:resumeBehavior>auto_resume</ovirt-vm:resumeBehavior></ovirt-vm:vm></metadata></domain>',
> 'smpCoresPerSocket': '4', 'kvmEnable': 'true', 'bootMenuEnable':
> 'false', 'devices': [], 'custom': {}, 'maxVCpus': '64', 'statusTime':
> '4357530330', 'vmName': 'HostedEngine', 'maxMemSlots': 16}}
> from=::1,40284, vmId=d116b296-9ae7-4ff3-80b4-73dc228a7b64 (api:54)
>
> [...]
>
> <feature name="virt-ssbd" policy="require"/>
> --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8< --- --- 8<
>
>
> There it sets "<feature name="virt-ssbd" policy="require"/>", which is
> added to the XML, and as long as that is present it will break.
>
>
> Ayee.
>
>
> --
> Christian Reiss - email(a)christian-reiss.de /"\ ASCII Ribbon
> support(a)alpha-labs.net \ / Campaign
> X against HTML
> WEB alpha-labs.net / \ in eMails
>
> GPG Retrieval https://gpg.christian-reiss.de
> GPG ID ABCD43C5, 0x44E29126ABCD43C5
> GPG fingerprint = 9549 F537 2596 86BA 733C A4ED 44E2 9126 ABCD 43C5
>
> "It's better to reign in hell than to serve in heaven.",
> John Milton, Paradise lost.
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/HW2A4TLFFLK...
5 years
oVirt instance having RTO every 10 minutes
by Crazy Ayansh
Hi Team,
I am using oVirt hosted engine 4.3.6.6-1, the newest one, but it seems to me
it's not stable, as my hosted engine VM stops pinging every 10 minutes
and my web console gets disconnected every time.
[image: image.png]
See in the above snapshot: every 10 minutes the Data Center status becomes Non
Responsive. Could anyone help out here?
Thanks
Shashank
5 years