Re: After importing storage, many of my VM disks do not appear
by Eyal Edri
On Mon, Feb 18, 2019 at 5:41 PM Kalil de A. Carvalho <kalilac(a)gmail.com>
wrote:
> Hello all.
> It has been a while since I last used oVirt, and at the same time the
> company where I started working has had a big problem with the engine: we
> lost it and did not have any backup.
> To try to recover the environment I installed a new oVirt, this time as a
> Hosted Engine, and imported the storage. I could import and start some VMs,
> but most of them do not appear for me; they don't show up under "Disk
> Import", "VM Import", or anywhere else.
> I know they are there, because the storage shows me their size.
> Can I see, i.e. list, these disks? I can't do this through the oVirt web UI.
> Is there any way to bring these disks back?
> I have already searched for this but with no luck.
>
> oVirt ver: 4.2.8.2-1.el7
>
> Can anyone help, please?
> Best regards.
>
> --
> Atenciosamente,
> Kalil de A. Carvalho
>
--
Eyal Edri
MANAGER
RHV/CNV DevOps
EMEA VIRTUALIZATION R&D
Red Hat EMEA <https://www.redhat.com/>
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
After importing storage, many of my VM disks do not appear
by Kalil de A. Carvalho
Hello all.
It has been a while since I last used oVirt, and at the same time the company
where I started working has had a big problem with the engine: we lost it and
did not have any backup.
To try to recover the environment I installed a new oVirt, this time as a
Hosted Engine, and imported the storage. I could import and start some VMs,
but most of them do not appear for me; they don't show up under "Disk Import",
"VM Import", or anywhere else.
I know they are there, because the storage shows me their size.
Can I see, i.e. list, these disks? I can't do this through the oVirt web UI.
Is there any way to bring these disks back?
I have already searched for this but with no luck.
oVirt ver: 4.2.8.2-1.el7
Can anyone help, please?
Best regards.
--
Atenciosamente,
Kalil de A. Carvalho
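A possible way to at least enumerate what is sitting on the imported storage domain (a sketch only; the engine FQDN, credentials, storage-domain UUID, and mount path below are placeholders) is to ask the REST API for the domain's disks, or, for a file-based domain, to list the image directories directly on a host:
# List the disks the engine knows about on a given storage domain
# (placeholders: engine FQDN, admin password, storage-domain UUID).
curl -s -k -u 'admin@internal:PASSWORD' -H 'Accept: application/xml' \
  'https://engine.example.com/ovirt-engine/api/storagedomains/SD_UUID/disks'
# For a file-based (e.g. NFS) domain, the image UUIDs can also be listed
# directly on a host that has the domain mounted (example path):
ls /rhev/data-center/mnt/nfs.example.com:_export_data/SD_UUID/images/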
How to set hot plugged memory online_movable
by zodaoko@gmail.com
Hi,
I'd like to try memory hot unplug, as described in: https://www.ovirt.org/documentation/vmm-guide/chap-Editing_Virtual_Machin...:
"All blocks of the hot-plugged memory must be set to **online_movable** in the virtual machine’s device management rules. In virtual machines running up-to-date versions of Enterprise Linux or CoreOS, this rule is set by default."
I created a VM running CentOS 7.6:
# more /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
# more /usr/lib/udev/rules.d/40-redhat.rules
...
# Memory hotadd request
SUBSYSTEM!="memory", ACTION!="add", GOTO="memory_hotplug_end"
PROGRAM="/bin/uname -p", RESULT=="s390*", GOTO="memory_hotplug_end"
ENV{.state}="online"
PROGRAM="/bin/systemd-detect-virt", RESULT=="none", ENV{.state}="online_movable"
ATTR{state}=="offline", ATTR{state}="$env{.state}"
LABEL="memory_hotplug_end"
It looks like online_movable is set only when systemd-detect-virt returns "none", i.e. when memory is hot-plugged on a bare-metal machine. So how can I make hot-plugged memory "online_movable" in virtual machines? Thank you.
Regards,
-Zhen
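One possible workaround (an untested sketch; the file name and rule below are my own and not taken from the oVirt documentation) is to add a udev rule that sorts before 40-redhat.rules and puts hot-added memory blocks straight into online_movable, so the stock rule then sees the block as no longer offline and leaves it alone:
# Sketch: force hot-added memory blocks to online_movable inside the guest.
# The file name is arbitrary, but it must sort before 40-redhat.rules.
cat > /etc/udev/rules.d/39-memory-hotplug-movable.rules <<'EOF'
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online_movable"
EOF
udevadm control --reload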
changes in oVirt 4.3 and vGPU?
by Hetz Ben Hamo
Hi,
I just installed a Tesla T4 card and NVIDIA's RPM, and I can see the
mdev_type entries, etc.
Following their instructions, I'm trying to set a Windows 10 VM to use the
vGPU (the VM works fine without any vGPU), but I get this error in the events:
VM Win-10-test is down with error. Exit message: internal error: qemu
unexpectedly closed the monitor: 2019-02-08T14:01:11.287955Z qemu-kvm:
warning: All CPU(s) up to maxcpus should be described in NUMA config,
ability to start up with partial NUMA mappings is obsoleted and will be
removed in future
2019-02-08T14:01:11.313878Z qemu-kvm: -device
vfio-pci,id=hostdev0,sysfsdev=/sys/bus/mdev/devices/486b48a3-01c7-4a67-9727-279813bae0e8,display=off,bus=pci.0,addr=0x8:
vfio error: 486b48a3-01c7-4a67-9727-279813bae0e8: error getting device from
group 0: Input/output error
Verify all devices in group 0 are bound to vfio-<bus> or pci-stub and not
already in use.
Could someone explain what I am missing and what to do? I don't see any
docs about this.
Thanks
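As a first step on the "verify all devices in group 0" hint in that error, something like the following (a generic sketch; adjust the group number and the GPU's PCI address for your host) shows what is in the IOMMU group and which driver each device is bound to, and whether the mdev instance exists at all:
# List every device in IOMMU group 0 and the driver it is bound to.
for dev in /sys/kernel/iommu_groups/0/devices/*; do
  drv=$(readlink "$dev/driver" 2>/dev/null || echo none)
  echo "$(basename "$dev") -> $(basename "$drv")"
done
# Confirm the mdev instance exists and check the parent GPU's binding.
ls -l /sys/bus/mdev/devices/
lspci -nnk -s 0000:3b:00.0   # replace with the Tesla T4's PCI address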
Migration of VMs across two Ovirt setups
by ddc.comp@gmail.com
Hi,
I have two oVirt setups, each with multiple hosts and clusters: around 100+ VMs in one and around 32 in the other.
I want to dismantle one setup, and before doing so all of its VMs need to be migrated to the other setup.
I tried importing VMs from the other setup using libvirt -- https://ovirt.org/develop/release-management/features/virt/KvmToOvirt.html -- but no luck; the connection does not get established.
What if I simply remove a host (with its local storage -- all VMs are located on local storage) from one setup and add it to the other, then migrate the VMs from local to shared storage and dismantle the host?
Regards
Dipak Chaudhari
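For what it is worth, the KVM import path needs a working libvirt connection from a destination host to the source host; a quick way to test just that piece (hostnames are placeholders, and SSH key access for root on the source host is assumed) is:
# Run from a host in the destination oVirt setup; this is the same kind of
# URI the "Import VM" KVM provider uses.
virsh -c qemu+ssh://root@source-host.example.com/system list --all
If that fails, the oVirt import will fail in the same way, so it is worth fixing that connection first.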
AMD EPYC 4.3 upgrade 'CPU type is not supported in this cluster compatibility version or is not supported at all'
by Ryan Bullock
We just updated our engine to 4.3, but when I tried to update one of our
AMD EPYC hosts it could not activate with the error:
Host vmc2h2 moved to Non-Operational state as host CPU type is not
supported in this cluster compatibility version or is not supported at all.
Relevant (I think) parts from the engine log:
(EE-ManagedThreadFactory-engineScheduled-Thread-82) [ee51a70] Could not
find server cpu for server 'vmc2h2' (745a14c6-9d31-48a4-9566-914647d83f53),
flags:
'fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,constant_tsc,art,rep_good,nopl,nonstop_tsc,extd_apicid,amd_dcm,aperfmperf,eagerfpu,pni,pclmulqdq,monitor,ssse3,fma,cx16,sse4_1,sse4_2,movbe,popcnt,aes,xsave,avx,f16c,rdrand,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,skinit,wdt,tce,topoext,perfctr_core,perfctr_nb,bpext,perfctr_l2,cpb,hw_pstate,sme,retpoline_amd,ssbd,ibpb,vmmcall,fsgsbase,bmi1,avx2,smep,bmi2,rdseed,adx,smap,clflushopt,sha_ni,xsaveopt,xsavec,xgetbv1,clzero,irperf,xsaveerptr,arat,npt,lbrv,svm_lock,nrip_save,tsc_scale,vmcb_clean,flushbyasid,decodeassists,pausefilter,pfthreshold,avic,v_vmsave_vmload,vgif,overflow_recov,succor,smca'
2019-02-06 17:23:58,527-08 INFO
[org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-82) [7f6d4f0d] START,
SetVdsStatusVDSCommand(HostName = vmc2h2,
SetVdsStatusVDSCommandParameters:{hostId='745a14c6-9d31-48a4-9566-914647d83f53',
status='NonOperational',
nonOperationalReason='CPU_TYPE_INCOMPATIBLE_WITH_CLUSTER'
From virsh -r capabilities:
<cpu>
<arch>x86_64</arch>
<model>EPYC-IBPB</model>
<vendor>AMD</vendor>
<microcode version='134222375'/>
<topology sockets='1' cores='32' threads='2'/>
<feature name='ht'/>
<feature name='osxsave'/>
<feature name='xsaves'/>
<feature name='cmp_legacy'/>
<feature name='extapic'/>
<feature name='skinit'/>
<feature name='wdt'/>
<feature name='tce'/>
<feature name='topoext'/>
<feature name='perfctr_core'/>
<feature name='perfctr_nb'/>
<feature name='invtsc'/>
<pages unit='KiB' size='4'/>
<pages unit='KiB' size='2048'/>
<pages unit='KiB' size='1048576'/>
</cpu>
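To compare what the host actually reports to the engine with what the cluster expects, one option (assuming vdsm-client is installed on the host) is:
# Show the CPU model and flags VDSM reports; the engine matches these flags
# against the models in its ServerCPUList entry for the cluster level.
vdsm-client Host getCapabilities | grep -E '"cpuModel"|"cpuFlags"'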
I also tried creating a new 4.3 cluster set to AMD EPYC IBPB SSBD and
moving the host into it, but it failed to move into that cluster with a
similar error about an unsupported CPU (for some reason it also made me
clear the additional kernel options; we use 1 GB hugepages). I have not
yet tried removing the host entirely and re-adding it as part of creating
the new cluster.
We have been/are using a database change to update the 4.2 cluster level to
include EPYC support with the following entries (can post the whole query
if needed):
7:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64; 8:AMD EPYC
IBPB:svm,nx,ibpb,model_EPYC:EPYC-IBPB:x86_64
We have been running 4.2 with this for a while. We did apply the same
changes after the 4.3 update, but only for the 4.2 cluster level. We only
used the AMD EPYC IBPB model.
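For reference, the kind of change described above usually ends up as an edit to the ServerCPUList row in vdc_options on the engine database; a rough sketch, abridged and not an officially supported change (the "..." stands for the existing, unabridged list of models for that level), followed by an ovirt-engine restart so the cached value is reloaded:
# Sketch only: append the EPYC entries to the 4.2 ServerCPUList value.
sudo -u postgres psql engine -c \
  "UPDATE vdc_options SET option_value = '...;7:AMD EPYC:svm,nx,model_EPYC:EPYC:x86_64;8:AMD EPYC IBPB:svm,nx,ibpb,model_EPYC:EPYC-IBPB:x86_64' WHERE option_name = 'ServerCPUList' AND version = '4.2';"
systemctl restart ovirt-engine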
Reverting the host back to 4.2 allows it to activate and run normally.
Does anyone have any ideas as to why it can't seem to find the CPU type?
Thanks,
Ryan Bullock
Re: Best Practice to Deploy HostedEngine oVirt Environment
by Eyal Edri
Adding the users list; infra is for infrastructure and CI.
On Mon, Feb 18, 2019, 05:40 Mohd Hanief Harun <hanief(a)abyres.net> wrote:
> Hi all,
>
> I'm new to RHV and oVirt; I hope there is some room for a noob question :)
>
> Let's say I have 2 hypervisors and 1 storage server. What is the best
> practice for installing the self-hosted engine: on the hypervisors or on
> the storage? I have experience installing the self-hosted engine across 2
> storage servers; if something happened to storage 1, the ovirt-engine
> (oVirt Manager) would automatically migrate to storage 2.
>
> But this time it is different. We only have 1 storage server. My concern
> is that if something happens to the storage, I cannot access my oVirt
> Manager. I'm thinking of deploying the self-hosted engine on the
> hypervisors, since we have 2 of them. Any advice? What is the best
> practice? If you have a tutorial on installing the hosted engine on 2
> hypervisors, it would be much appreciated.
>
> thanks.
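In broad strokes (a sketch, not a full tutorial): the hosted engine is deployed on one of the hypervisors, its dedicated storage domain lives on the storage server, and the second hypervisor is then added as another hosted-engine host so the engine VM can restart there if the first host fails. With a single storage server, the storage itself stays a single point of failure either way.
# On the first hypervisor (interactive; it asks for the storage that will
# hold the engine VM's dedicated storage domain):
hosted-engine --deploy
# Afterwards, from any hosted-engine host, check the HA state:
hosted-engine --vm-status
The second hypervisor is typically added from the Administration Portal with the hosted-engine deployment action enabled, so it too can run the engine VM.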
Huge Pages
by Vincent Royer
How do I know how many huge pages my hosts can support?
cat /proc/meminfo | grep Huge
AnonHugePages: 17684480 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
[image: image.png]
And once I know, I set the kernel parameters here, and reboot the host,
correct?
[image: image.png]
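For what the kernel-argument step usually looks like, whether typed into the host's kernel command line field in the UI or applied directly with grubby, here is an example (the count and size are arbitrary; 1 GiB pages also require the pdpe1gb CPU flag). Roughly, a host can back as many pages as fit in its RAM after leaving room for the hypervisor and any non-hugepage VMs; each reserved 1 GiB page consumes 1 GiB whether or not a guest uses it.
# Reserve 16 x 1 GiB huge pages at boot on the host, then reboot.
grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=16"
reboot
# After the reboot, confirm the reservation took effect:
grep -i huge /proc/meminfo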
And then I assume I assign them to the VM here? How do I decide how many
huge pages and what size a particular VM can benefit from?
[image: image.png]
Is there a part of the docs I am not finding that covers this?
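On the VM side, if memory serves, hugepages are requested through a custom property named hugepages whose value is the page size in KiB (2048 for 2 MiB pages, 1048576 for 1 GiB pages); the VM's whole memory is then backed by pages of that size, so the number it consumes from the host pool is simply the VM memory divided by the page size. Large, latency-sensitive guests (databases, NFV-style workloads) tend to benefit most. A quick way to confirm a running VM really got its pages (the VM name is a placeholder):
# HugePages_Free on the host should drop by the VM's memory size.
grep -i huge /proc/meminfo
# The running domain's XML should show a <memoryBacking><hugepages> element.
virsh -r dumpxml my-vm-name | grep -A3 '<memoryBacking>'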