See inline.

On Mar 30, 2022, at 13:41, Nir Soffer <nsoffer@redhat.com> wrote:

On Wed, Mar 30, 2022 at 11:26 PM JC Lopez <jc@lightbitslabs.com> wrote:
Hi Nir,
Wiped out the node as the procedure provided did not fix the problem.
Fresh CentOS Stream 8
Looks like the vdsm I deployed requires Ansible 2.12:
Depsolve Error occured:
 Problem: cannot install the best candidate for the job
  - nothing provides virt-install needed by ovirt-hosted-engine-setup-2.6.4-0.0.master.20220329124709.git59931a1.el8.noarch
  - nothing provides ansible-core >= 2.12 needed by ovirt-hosted-engine-setup-2.6.4-0.0.master.20220329124709.git59931a1.el8.noarch
Didi, do we have a solution to the ansible requirement? Maybe some
repo is missing?
Here is what I have configured on my single node, if it can help:

[root@client2 ~]# dnf repolist
Repository copr:copr.fedorainfracloud.org:ovirt:ovirt-master-snapshot is listed more than once in the configuration
repo id                                                          repo name
appstream                                                        CentOS Stream 8 - AppStream
baseos                                                           CentOS Stream 8 - BaseOS
copr:copr.fedorainfracloud.org:ovirt:ovirt-master-snapshot       Copr repo for ovirt-master-snapshot owned by ovirt
elrepo                                                           ELRepo.org Community Enterprise Linux Repository - el8
epel                                                             Extra Packages for Enterprise Linux 8 - x86_64
epel-modular                                                     Extra Packages for Enterprise Linux Modular 8 - x86_64
extras                                                           CentOS Stream 8 - Extras
extras-common                                                    CentOS Stream 8 - Extras common packages
ovirt-appliance-master-snapshot                                  oVirt appliance with ovirt-master-snapshot content
ovirt-master-centos-opstools-testing                             CentOS Stream 8 - OpsTools - collectd
ovirt-master-centos-stream-ceph-pacific                          CentOS Stream 8 - Ceph packages for x86_64
ovirt-master-centos-stream-gluster10-testing                     CentOS Stream 8 - Glusterfs 10 - testing
ovirt-master-centos-stream-nfv-openvswitch2-testing              CentOS Stream 8 - NFV OpenVSwitch 2 - testing
ovirt-master-centos-stream-openstack-yoga-testing                CentOS Stream 8 - OpenStack Yoga Repository - testing
ovirt-master-centos-stream-ovirt45-testing                       CentOS Stream 8 - oVirt 4.5 - testing
ovirt-master-copr:copr.fedorainfracloud.org:sac:gluster-ansible  Copr repo for gluster-ansible owned by sac
ovirt-master-epel                                                Extra Packages for Enterprise Linux 8 - x86_64
ovirt-master-virtio-win-latest                                   virtio-win builds roughly matching what will be shipped in upcoming RHEL
ovirt-node-master-snapshot                                       oVirt Node with ovirt-master-snapshot content
powertools                                                       CentOS Stream 8 - PowerTools
rdo-delorean-component-cinder                                    RDO Delorean OpenStack Cinder - current
rdo-delorean-component-clients                                   RDO Delorean Clients - current
rdo-delorean-component-common                                    RDO Delorean Common - current
rdo-delorean-component-network                                   RDO Delorean Network - current

But the ovirt-engine requires Ansible 2.9.27-2:
package ovirt-engine-4.5.0.1-0.2.master.20220330145541.gitaff1492753.el8.noarch conflicts with ansible-core >= 2.10.0 provided by ansible-core-2.12.2-2.el8.x86_64
So if I enable all my repos, the deployment wants to install packages that require Ansible 2.12, but because of the ovirt-engine requirements it cannot go past Ansible 2.10. So I end up in a deadlock situation.
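The conflict follows directly from the two version constraints quoted above. A minimal shell sketch (version strings copied from the error messages; this is an illustration, not a fix) showing that no ansible-core version can satisfy both:

```shell
# ovirt-engine conflicts with ansible-core >= 2.10.0, while
# ovirt-hosted-engine-setup requires ansible-core >= 2.12.
engine_conflict="2.10.0"   # engine refuses anything at or above this
setup_min="2.12"           # hosted-engine-setup needs at least this

# Any version satisfying setup_min sorts at or above engine_conflict,
# so every installable candidate triggers the engine conflict:
lowest=$(printf '%s\n%s\n' "$engine_conflict" "$setup_min" | sort -V | head -n1)
if [ "$lowest" = "$engine_conflict" ]; then
    echo "deadlock: every ansible-core >= $setup_min also hits the engine conflict"
fi
```

`dnf repoquery --requires <package>` can be used the same way to see which side of the conflict each installed package comes from.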
Not sure what to do. Will get onto IRC tomorrow to check on this with you.
Question: when is oVirt 4.5 being officially released? Maybe it will be easier for me to start from that point.
We should have 4.5 beta next week.
Best regards
JC
On Mar 29, 2022, at 11:08, Nir Soffer <nsoffer@redhat.com> wrote:
On Tue, Mar 29, 2022 at 3:26 AM JC Lopez <jc@lightbitslabs.com> wrote:
Hi Nir,
Tried to do this, but somehow the UI does not let me drag the network anywhere in the window.
Just in case, I tried with the host both in maintenance mode and not in maintenance mode. Tried drag and drop on every area of the dialog box I could think of, without success.
Tried with 3 different browsers to rule out browser incompatibility
- Safari
- Chrome
- Firefox
So NO idea why no network interfaces are detected on this node. FYI my CPU model is a Broadwell one.
If engine does not detect any network interface, "setup networks" is not going to be very useful.
I'm not sure how you got into this situation, maybe this is an upgrade issue.
I suggest starting clean:
1. Remove current vdsm install on the host
dnf remove vdsm\*
2. Upgrade your host to latest CentOS Stream 8
3. Add the ovirt repos:
https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/
dnf copr enable -y ovirt/ovirt-master-snapshot centos-stream-8
dnf install -y ovirt-release-master
4. Make sure your host network configuration is right
You should be able to connect from your engine machine to the host.
5. Add the host to your engine
Engine will install the host and reboot it. The host should be up when
this is done.
6. Add some storage so you have a master storage domain.
The easiest way is to add an NFS storage domain, but you can also use iSCSI or FC if you like.
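As an illustration for the NFS option: oVirt expects the exported directory to be owned by vdsm:kvm (uid/gid 36). A hypothetical /etc/exports entry (the path and export options here are assumptions, not from this thread):

```
/exports/ovirt-data    *(rw,anonuid=36,anongid=36)
```

Create the directory, `chown 36:36` it, and run `exportfs -r` before adding the storage domain in engine.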
At this point you should have a working setup.
The next step is to update engine and vdsm with Benny's patches, but don't try this before you have a working system.
If you need more help we can chat in #ovirt on oftc.net.
Nir
Best regards
JC
Initial window sees no network interface
Clicking on setup network does not have any interface to which I can assign the ovirtmgmt network
On Mar 28, 2022, at 13:38, Nir Soffer <nsoffer@redhat.com> wrote:
On Mon, Mar 28, 2022 at 11:31 PM Nir Soffer <nsoffer@redhat.com> wrote:
On Mon, Mar 28, 2022 at 10:48 PM JC Lopez <jc@lightbitslabs.com> wrote:
Hi Benny et all,
...
With 4.5 I can not bring the host up
Here is my cluster spec
In the UI I see the following when trying to add host client2
In the screenshot we see 2 issues:
- the host does not have a default route
- the host CPU is missing some features
To resolve the default route issue, click on the host name in the "Hosts" page, then click on "Network interfaces", and then the "Setup networks" button. Make sure the ovirtmgmt network is assigned to the right network interface, and edit it as needed.
Adding screenshot in case it was not clear enough.
To quickly avoid this issue, select an older CPU from the list. This should be good enough for development. Maybe Arik can help with using the actual CPU you have.
However, when I check the node's capabilities using vdsm-client, I get this for each flag mentioned:
[root@client2 ~]# vdsm-client Host getCapabilities | grep kvm
"cpuFlags": "clflush,cqm_llc,sdbg,abm,movbe,bmi1,intel_ppin,apic,sse2,mce,arat,amd-ssbd,monitor,xsaveopt,ida,bmi2,pclmulqdq,ssse3,pni,ss,rdrand,xsave,avx2,intel_pt,sse4_2,ept_ad,stibp,rdseed,pse,dts,dtes64,3dnowprefetch,cpuid_fault,acpi,fsgsbase,cdp_l3,x2apic,fpu,invpcid_single,sse4_1,pti,amd-stibp,ht,pae,pat,tsc,mmx,nonstop_tsc,pdcm,ssbd,invpcid,cqm,de,lahf_lm,vpid,smep,tm,vmx,bts,rdtscp,flexpriority,rtm,pse36,smap,cqm_mbm_total,cmov,smx,skip-l1dfl-vmentry,est,tm2,hypervisor,pge,tsc_deadline_timer,invtsc,nx,pbe,f16c,vnmi,nopl,epb,cx8,msr,umip,pcid,aperfmperf,xtpr,avx,cqm_occup_llc,pdpe1gb,constant_tsc,tsc_adjust,sse,arch_perfmon,ept,ibrs,adx,dtherm,pschange-mc-no,cat_l3,rep_good,ibpb,pln,fma,sep,aes,pts,syscall,xtopology,rdt_a,erms,cpuid,flush_l1d,vme,cx16,popcnt,mca,cqm_mbm_local,mtrr,arch-capabilities,dca,tpr_shadow,lm,ds_cpl,fxsr,hle,pebs,spec_ctrl,model_Opteron_G1,model_486,model_Broadwell-noTSX-IBRS,model_Westmere-IBRS,model_IvyBridge-IBRS,model_Conroe,model_Opteron_G2,model_Broadwell-noTSX,model_qemu32,model_Haswell,model_Westmere,model_Haswell-IBRS,model_core2duo,model_Broadwell-IBRS,model_Broadwell,model_n270,model_Haswell-noTSX-IBRS,model_IvyBridge,model_pentium3,model_Penryn,model_Nehalem,model_kvm64,model_qemu64,model_SandyBridge-IBRS,model_coreduo,model_Haswell-noTSX,model_Nehalem-IBRS,model_kvm32,model_pentium,model_SandyBridge,model_pentium2",
"kvmEnabled": "true",
"qemu-kvm": {
"kvm"
[root@client2 ~]# vdsm-client Host getCapabilities | grep nx
"cpuFlags": "clflush,cqm_llc,sdbg,abm,movbe,bmi1,intel_ppin,apic,sse2,mce,arat,amd-ssbd,monitor,xsaveopt,ida,bmi2,pclmulqdq,ssse3,pni,ss,rdrand,xsave,avx2,intel_pt,sse4_2,ept_ad,stibp,rdseed,pse,dts,dtes64,3dnowprefetch,cpuid_fault,acpi,fsgsbase,cdp_l3,x2apic,fpu,invpcid_single,sse4_1,pti,amd-stibp,ht,pae,pat,tsc,mmx,nonstop_tsc,pdcm,ssbd,invpcid,cqm,de,lahf_lm,vpid,smep,tm,vmx,bts,rdtscp,flexpriority,rtm,pse36,smap,cqm_mbm_total,cmov,smx,skip-l1dfl-vmentry,est,tm2,hypervisor,pge,tsc_deadline_timer,invtsc,nx,pbe,f16c,vnmi,nopl,epb,cx8,msr,umip,pcid,aperfmperf,xtpr,avx,cqm_occup_llc,pdpe1gb,constant_tsc,tsc_adjust,sse,arch_perfmon,ept,ibrs,adx,dtherm,pschange-mc-no,cat_l3,rep_good,ibpb,pln,fma,sep,aes,pts,syscall,xtopology,rdt_a,erms,cpuid,flush_l1d,vme,cx16,popcnt,mca,cqm_mbm_local,mtrr,arch-capabilities,dca,tpr_shadow,lm,ds_cpl,fxsr,hle,pebs,spec_ctrl,model_Opteron_G1,model_486,model_Broadwell-noTSX-IBRS,model_Westmere-IBRS,model_IvyBridge-IBRS,model_Conroe,model_Opteron_G2,model_Broadwell-noTSX,model_qemu32,model_Haswell,model_Westmere,model_Haswell-IBRS,model_core2duo,model_Broadwell-IBRS,model_Broadwell,model_n270,model_Haswell-noTSX-IBRS,model_IvyBridge,model_pentium3,model_Penryn,model_Nehalem,model_kvm64,model_qemu64,model_SandyBridge-IBRS,model_coreduo,model_Haswell-noTSX,model_Nehalem-IBRS,model_kvm32,model_pentium,model_SandyBridge,model_pentium2",
[root@client2 ~]# vdsm-client Host getCapabilities | grep vmx
"cpuFlags": "clflush,cqm_llc,sdbg,abm,movbe,bmi1,intel_ppin,apic,sse2,mce,arat,amd-ssbd,monitor,xsaveopt,ida,bmi2,pclmulqdq,ssse3,pni,ss,rdrand,xsave,avx2,intel_pt,sse4_2,ept_ad,stibp,rdseed,pse,dts,dtes64,3dnowprefetch,cpuid_fault,acpi,fsgsbase,cdp_l3,x2apic,fpu,invpcid_single,sse4_1,pti,amd-stibp,ht,pae,pat,tsc,mmx,nonstop_tsc,pdcm,ssbd,invpcid,cqm,de,lahf_lm,vpid,smep,tm,vmx,bts,rdtscp,flexpriority,rtm,pse36,smap,cqm_mbm_total,cmov,smx,skip-l1dfl-vmentry,est,tm2,hypervisor,pge,tsc_deadline_timer,invtsc,nx,pbe,f16c,vnmi,nopl,epb,cx8,msr,umip,pcid,aperfmperf,xtpr,avx,cqm_occup_llc,pdpe1gb,constant_tsc,tsc_adjust,sse,arch_perfmon,ept,ibrs,adx,dtherm,pschange-mc-no,cat_l3,rep_good,ibpb,pln,fma,sep,aes,pts,syscall,xtopology,rdt_a,erms,cpuid,flush_l1d,vme,cx16,popcnt,mca,cqm_mbm_local,mtrr,arch-capabilities,dca,tpr_shadow,lm,ds_cpl,fxsr,hle,pebs,spec_ctrl,model_Opteron_G1,model_486,model_Broadwell-noTSX-IBRS,model_Westmere-IBRS,model_IvyBridge-IBRS,model_Conroe,model_Opteron_G2,model_Broadwell-noTSX,model_qemu32,model_Haswell,model_Westmere,model_Haswell-IBRS,model_core2duo,model_Broadwell-IBRS,model_Broadwell,model_n270,model_Haswell-noTSX-IBRS,model_IvyBridge,model_pentium3,model_Penryn,model_Nehalem,model_kvm64,model_qemu64,model_SandyBridge-IBRS,model_coreduo,model_Haswell-noTSX,model_Nehalem-IBRS,model_kvm32,model_pentium,model_SandyBridge,model_pentium2",
[root@client2 ~]# vdsm-client Host getCapabilities | grep Broadwell
"cpuFlags": "clflush,cqm_llc,sdbg,abm,movbe,bmi1,intel_ppin,apic,sse2,mce,arat,amd-ssbd,monitor,xsaveopt,ida,bmi2,pclmulqdq,ssse3,pni,ss,rdrand,xsave,avx2,intel_pt,sse4_2,ept_ad,stibp,rdseed,pse,dts,dtes64,3dnowprefetch,cpuid_fault,acpi,fsgsbase,cdp_l3,x2apic,fpu,invpcid_single,sse4_1,pti,amd-stibp,ht,pae,pat,tsc,mmx,nonstop_tsc,pdcm,ssbd,invpcid,cqm,de,lahf_lm,vpid,smep,tm,vmx,bts,rdtscp,flexpriority,rtm,pse36,smap,cqm_mbm_total,cmov,smx,skip-l1dfl-vmentry,est,tm2,hypervisor,pge,tsc_deadline_timer,invtsc,nx,pbe,f16c,vnmi,nopl,epb,cx8,msr,umip,pcid,aperfmperf,xtpr,avx,cqm_occup_llc,pdpe1gb,constant_tsc,tsc_adjust,sse,arch_perfmon,ept,ibrs,adx,dtherm,pschange-mc-no,cat_l3,rep_good,ibpb,pln,fma,sep,aes,pts,syscall,xtopology,rdt_a,erms,cpuid,flush_l1d,vme,cx16,popcnt,mca,cqm_mbm_local,mtrr,arch-capabilities,dca,tpr_shadow,lm,ds_cpl,fxsr,hle,pebs,spec_ctrl,model_Opteron_G1,model_486,model_Broadwell-noTSX-IBRS,model_Westmere-IBRS,model_IvyBridge-IBRS,model_Conroe,model_Opteron_G2,model_Broadwell-noTSX,model_qemu32,model_Haswell,model_Westmere,model_Haswell-IBRS,model_core2duo,model_Broadwell-IBRS,model_Broadwell,model_n270,model_Haswell-noTSX-IBRS,model_IvyBridge,model_pentium3,model_Penryn,model_Nehalem,model_kvm64,model_qemu64,model_SandyBridge-IBRS,model_coreduo,model_Haswell-noTSX,model_Nehalem-IBRS,model_kvm32,model_pentium,model_SandyBridge,model_pentium2",
So all the flags the UI claims are missing are actually present.
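One caveat with the greps above: matching the whole JSON line means short flag names can match as substrings of other flags (e.g. "nx" also matches inside "nonstop_tsc"). A small sketch of a comma-anchored check that only matches exact flag names; the flags string here is an abridged example of the real getCapabilities value:

```shell
# Abridged example of the "cpuFlags" value; on a host this would come
# from: vdsm-client Host getCapabilities
flags="clflush,vmx,nx,ept,model_Broadwell,sse2"

# Wrap both the string and the flag in commas so only whole names match.
has_flag() {
    case ",$flags," in
        *",$1,"*) echo "$1: present" ;;
        *)        echo "$1: missing" ;;
    esac
}

has_flag vmx               # vmx: present
has_flag nx                # nx: present
has_flag model_Broadwell   # model_Broadwell: present
```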
Nir
<Screenshot from 2022-03-28 23-23-38.png>
Lightbits Labs
Lead the cloud-native data center transformation by delivering scalable and efficient software defined storage that is easy to consume.
This message is sent in confidence for the addressee only. It may contain legally privileged information. The contents are not to be disclosed to anyone other than the addressee. Unauthorized recipients are requested to preserve this confidentiality, advise the sender immediately of any error in transmission and delete the email from their systems.