See inline
On Mar 30, 2022, at 13:41, Nir Soffer <nsoffer(a)redhat.com> wrote:
On Wed, Mar 30, 2022 at 11:26 PM JC Lopez <jc(a)lightbitslabs.com> wrote:
Hi Nir,
Wiped out the node as the procedure provided did not fix the problem.
Fresh CentOS Stream 8
Looks like the vdsm I deployed requires Ansible 2.12
Depsolve Error occured:
 Problem: cannot install the best candidate for the job
  - nothing provides virt-install needed by ovirt-hosted-engine-setup-2.6.4-0.0.master.20220329124709.git59931a1.el8.noarch
  - nothing provides ansible-core >= 2.12 needed by ovirt-hosted-engine-setup-2.6.4-0.0.master.20220329124709.git59931a1.el8.noarch
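In case it's useful, I think the missing dependencies can be checked against the enabled repos with something like this (just a sketch, run on the host):

  dnf provides virt-install
  dnf repoquery --whatprovides ansible-core

to see whether any enabled repo actually ships them.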
Didi, do we have a solution to the ansible requirement? Maybe some
repo is missing?
Sorry, I do not have the full picture. Anyway:
1. Right now, the engine still requires ansible-2.9 [1].
2. The hosts - el8stream (or ovirt-node) - require (or include)
ansible-core-2.12.
So if you run into conflicts/requirements issues, please clarify
exactly what you do and on which machine (engine or host).
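If it helps to narrow that down, a quick way (just a sketch) to see what each machine currently has is something like:

  rpm -q ansible ansible-core

run on both the engine machine and the host.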
This is all changing quickly and has changed in the last few days.
I hope [1] will be merged by the time of beta, not sure.
If you want to test the very current state, I recommend running a
full 'dnf update' or 'dnf update --nobest' (and note what wasn't
upgraded), perhaps after doing 'dnf update \*release\*'.
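Roughly, the sequence I mean is something like this (untested as written, adjust to your machine):

  dnf update '*release*'
  dnf update --nobest    # or a plain 'dnf update'
  dnf repolist           # sanity-check which repos ended up enabled

and then note anything dnf reports as skipped or not upgraded.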
On my own machines, I have both virt-install and ansible-core from
repo "appstream" (meaning CentOS, not oVirt).
[1]
Here is what I have configured on my single node, in case it can help:
[root@client2 ~]# dnf repolist
I quickly skimmed through the list below and did not notice anything
obviously wrong.
Good luck and best regards,
Repository copr:copr.fedorainfracloud.org:ovirt:ovirt-master-snapshot is listed more than once in the configuration
repo id  repo name
appstream  CentOS Stream 8 - AppStream
baseos  CentOS Stream 8 - BaseOS
copr:copr.fedorainfracloud.org:ovirt:ovirt-master-snapshot  Copr repo for ovirt-master-snapshot owned by ovirt
elrepo  ELRepo.org Community Enterprise Linux Repository - el8
epel  Extra Packages for Enterprise Linux 8 - x86_64
epel-modular  Extra Packages for Enterprise Linux Modular 8 - x86_64
extras  CentOS Stream 8 - Extras
extras-common  CentOS Stream 8 - Extras common packages
ovirt-appliance-master-snapshot  oVirt appliance with ovirt-master-snapshot content
ovirt-master-centos-opstools-testing  CentOS Stream 8 - OpsTools - collectd
ovirt-master-centos-stream-ceph-pacific  CentOS Stream 8 - Ceph packages for x86_64
ovirt-master-centos-stream-gluster10-testing  CentOS Stream 8 - Glusterfs 10 - testing
ovirt-master-centos-stream-nfv-openvswitch2-testing  CentOS Stream 8 - NFV OpenVSwitch 2 - testing
ovirt-master-centos-stream-openstack-yoga-testing  CentOS Stream 8 - OpenStack Yoga Repository - testing
ovirt-master-centos-stream-ovirt45-testing  CentOS Stream 8 - oVirt 4.5 - testing
ovirt-master-copr:copr.fedorainfracloud.org:sac:gluster-ansible  Copr repo for gluster-ansible owned by sac
ovirt-master-epel  Extra Packages for Enterprise Linux 8 - x86_64
ovirt-master-virtio-win-latest  virtio-win builds roughly matching what will be shipped in upcoming RHEL
ovirt-node-master-snapshot  oVirt Node with ovirt-master-snapshot content
powertools  CentOS Stream 8 - PowerTools
rdo-delorean-component-cinder  RDO Delorean OpenStack Cinder - current
rdo-delorean-component-clients  RDO Delorean Clients - current
rdo-delorean-component-common  RDO Delorean Common - current
rdo-delorean-component-network  RDO Delorean Network - current
But the ovirt-engine requires Ansible 2.9.27-2
package ovirt-engine-4.5.0.1-0.2.master.20220330145541.gitaff1492753.el8.noarch conflicts with ansible-core >= 2.10.0 provided by ansible-core-2.12.2-2.el8.x86_64
So if I enable all my repos, the deployment wants to install packages that
require ansible-core 2.12, but because of the ovirt-engine (Manager) requirement
it cannot go above Ansible 2.10. So I end up in a deadlock.
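For reference, I believe the conflict itself can be double-checked with something like (assuming ovirt-engine is installed or available in an enabled repo):

  rpm -q --conflicts ovirt-engine
  dnf repoquery --conflicts ovirt-engine

which should show the 'ansible-core >= 2.10.0' conflict mentioned above.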
Not sure what to do. Will get onto IRC tomorrow to check on this with you.
Question: when is oVirt 4.5 being officially released? Maybe it will be
easier for me to start from that point.
We should have 4.5 beta next week.
Best regards
JC
On Mar 29, 2022, at 11:08, Nir Soffer <nsoffer(a)redhat.com> wrote:
On Tue, Mar 29, 2022 at 3:26 AM JC Lopez <jc(a)lightbitslabs.com> wrote:
Hi Nir,
Tried to do this but somehow the UI does not let me drag the network
anywhere in the window.
Just in case, I tried with the host both in maintenance mode and not in
maintenance mode. Tried drag and drop on every area of the dialog box I could
think of, without success.
Tried with 3 different browsers to rule out browser incompatibility
- Safari
- Chrome
- Firefox
So NO idea why no network interfaces are detected on this node. FYI my CPU
model is a Broadwell one.
If the engine does not detect any network interface, "Setup networks" is
not going to be very useful.
I'm not sure how you got into this situation, maybe this is an upgrade
issue.
I suggest starting clean:
1. Remove current vdsm install on the host
dnf remove vdsm\*
2. Upgrade your host to the latest CentOS Stream 8
3. Add the ovirt repos:
https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/
dnf copr enable -y ovirt/ovirt-master-snapshot centos-stream-8
dnf install -y ovirt-release-master
4. Make sure your host network configuration is right
You should be able to connect from your engine machine to the host.
5. Add the host to your engine
Engine will install the host and reboot it. The host should be up when
this is done.
6. Add some storage so you have a master storage domain.
The easiest way is to add an NFS storage domain, but you can also use iSCSI
or FC if you like (see the sketch below for a minimal NFS export).
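If you need to create the NFS export yourself, something like this usually works on the NFS server (36:36 is the vdsm:kvm user/group that oVirt expects to own the export; the path is just an example):

  mkdir -p /exports/data
  chown 36:36 /exports/data
  echo '/exports/data *(rw)' >> /etc/exports
  exportfs -r

Then add it in the engine as a new NFS storage domain.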
At this point you should have a working setup.
The next step is to update engine and vdsm with Benny's patches,
but don't try this before you have a working system.
If you need more help we can chat in #ovirt on
oftc.net.
Nir
Best regards
JC
The initial window shows no network interfaces.
Clicking "Setup networks" does not offer any interface to which I can
assign the ovirtmgmt network.
On Mar 28, 2022, at 13:38, Nir Soffer <nsoffer(a)redhat.com> wrote:
On Mon, Mar 28, 2022 at 11:31 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
On Mon, Mar 28, 2022 at 10:48 PM JC Lopez <jc(a)lightbitslabs.com> wrote:
Hi Benny et all,
...
With 4.5 I can not bring the host up
Here is my cluster spec
In the UI I see the following when trying to add host client2
In the screenshot we see 2 issues:
- the host does not have a default route
- the host CPU is missing some features
To resolve the default route issue, click on the host name in the
"Hosts" page, then open the "Network Interfaces" tab, click the
"Setup networks" button, and make sure the ovirtmgmt network is assigned
to the right network interface; edit it as needed.
Adding screenshot in case it was not clear enough.
To quickly avoid this issue, select an older CPU from the list. This
should be good enough for development. Maybe Arik can help with using the
actual CPU you have.
However, when I check the node's capabilities using vdsm-client, I get this
for each flag mentioned:
[root@client2 ~]# vdsm-client Host getCapabilities | grep kvm
"cpuFlags":
"clflush,cqm_llc,sdbg,abm,movbe,bmi1,intel_ppin,apic,sse2,mce,arat,amd-ssbd,monitor,xsaveopt,ida,bmi2,pclmulqdq,ssse3,pni,ss,rdrand,xsave,avx2,intel_pt,sse4_2,ept_ad,stibp,rdseed,pse,dts,dtes64,3dnowprefetch,cpuid_fault,acpi,fsgsbase,cdp_l3,x2apic,fpu,invpcid_single,sse4_1,pti,amd-stibp,ht,pae,pat,tsc,mmx,nonstop_tsc,pdcm,ssbd,invpcid,cqm,de,lahf_lm,vpid,smep,tm,vmx,bts,rdtscp,flexpriority,rtm,pse36,smap,cqm_mbm_total,cmov,smx,skip-l1dfl-vmentry,est,tm2,hypervisor,pge,tsc_deadline_timer,invtsc,nx,pbe,f16c,vnmi,nopl,epb,cx8,msr,umip,pcid,aperfmperf,xtpr,avx,cqm_occup_llc,pdpe1gb,constant_tsc,tsc_adjust,sse,arch_perfmon,ept,ibrs,adx,dtherm,pschange-mc-no,cat_l3,rep_good,ibpb,pln,fma,sep,aes,pts,syscall,xtopology,rdt_a,erms,cpuid,flush_l1d,vme,cx16,popcnt,mca,cqm_mbm_local,mtrr,arch-capabilities,dca,tpr_shadow,lm,ds_cpl,fxsr,hle,pebs,spec_ctrl,model_Opteron_G1,model_486,model_Broadwell-noTSX-IBRS,model_Westmere-IBRS,model_IvyBridge-IBRS,model_Conroe,model_Opteron_G2,model_Broadwell-noTSX,model_qemu32,model_Haswell,model_Westmere,model_Haswell-IBRS,model_core2duo,model_Broadwell-IBRS,model_Broadwell,model_n270,model_Haswell-noTSX-IBRS,model_IvyBridge,model_pentium3,model_Penryn,model_Nehalem,model_kvm64,model_qemu64,model_SandyBridge-IBRS,model_coreduo,model_Haswell-noTSX,model_Nehalem-IBRS,model_kvm32,model_pentium,model_SandyBridge,model_pentium2",
"kvmEnabled": "true",
"qemu-kvm": {
"kvm"
[root@client2 ~]# vdsm-client Host getCapabilities | grep nx
    "cpuFlags": (same flag list as above; it includes "nx")
[root@client2 ~]# vdsm-client Host getCapabilities | grep vmx
    "cpuFlags": (same flag list as above; it includes "vmx")
[root@client2 ~]# vdsm-client Host getCapabilities | grep Broadwell
    "cpuFlags": (same flag list as above; it includes the model_Broadwell* entries)
So all the flags the UI claims as missing are actually present.
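For what it's worth, a more targeted check (assuming jq is installed and cpuFlags is a top-level key in the JSON output) could look like:

  vdsm-client Host getCapabilities | jq -r '.cpuFlags' | tr ',' '\n' | grep -E '^(nx|vmx)$|Broadwell'

which lists each of the supposedly missing flags on its own line.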
Nir
<Screenshot from 2022-03-28 23-23-38.png>
Lightbits Labs
Lead the cloud-native data center transformation by delivering scalable
and efficient software defined storage that is easy to consume.
This message is sent in confidence for the addressee only. It may contain
legally privileged information. The contents are not to be disclosed to
anyone other than the addressee. Unauthorized recipients are requested to
preserve this confidentiality, advise the sender immediately of any error
in transmission and delete the email from their systems.