Ovirt Node Installation issue on Ovirt 4.2
by Hemant Gupta
Hi,
I downloaded the ISO onto a physical server, but I am unable to attach the host to my oVirt engine as it shows a VDSM error. During an update on the CentOS host installed from the ISO, it gives the following:
[root@r yum.repos.d]# sudo yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
Loaded plugins: enabled_repos_upload, fastestmirror, imgbased-persist, package_upload, product-id, search-disabled-repos, vdsmupgrade
Examining /var/tmp/yum-root-fu0Y2g/ovirt-release42.rpm: ovirt-release42-4.2.7-1.el7.noarch
/var/tmp/yum-root-fu0Y2g/ovirt-release42.rpm: does not update installed package.
Error: Nothing to do
Uploading Enabled Repositories Report
Loaded plugins: fastestmirror, product-id
Cannot upload enabled repos report, is this client registered?
5 years, 11 months
Configuring Gluster Hyperconverged on Cloud
by pawan.ratwani@tcs.com
Hi,
I would like to get hands-on experience with a Gluster hyperconverged environment, and as per my understanding compute and storage have to be on the same hosts. Since I don't have the physical infrastructure to support such a configuration, I would like to set up a 3-node cluster in the cloud, e.g. on AWS or Azure.
I would like to know: if I create EC2 instances on AWS (or VMs on Azure) with the desired compute, storage, and network capacity, is a Gluster hyperconverged cluster setup supported for such a deployment?
I am referring to https://www.ovirt.org/documentation/gluster-hyperconverged/ for this.
Thanks.
5 years, 11 months
[ANN] oVirt 4.2.8 Second Release Candidate is now available
by Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the oVirt
4.2.8 Second Release Candidate, as of December 11th, 2018.
This update is a release candidate of the eighth in a series of
stabilization updates to the 4.2 series.
This is pre-release software and should not be used in production.
This release is available now for:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.6 or later
* CentOS Linux (or similar) 7.6 or later
See the release notes [1] for installation / upgrade instructions and
a list of new features and bugs fixed.
Notes:
- oVirt Appliance is available
- oVirt Node will be available soon; build issues are causing a delay [2]
Additional Resources:
* Read more about the oVirt 4.2.8 release highlights:
http://www.ovirt.org/release/4.2.8/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/
[1] http://www.ovirt.org/release/4.2.8/
[2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA <https://www.redhat.com/>
sbonazzo@redhat.com
<https://red.ht/sig>
5 years, 11 months
Re: losing ib0 connection after activating host
by Douglas Duckworth
THANK YOU SO MUCH!
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug@med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Tue, Dec 11, 2018 at 2:03 AM Dominik Holler <dholler@redhat.com> wrote:
On Mon, 10 Dec 2018 18:09:40 +0000
Douglas Duckworth <dod2014@med.cornell.edu> wrote:
> Hi Dominik,
>
> I have added LACP bond network to all hosts and renamed the Hosted Engine using "/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename."
>
> However, I am still missing the option to assign Migration and Management network roles to this new bond.
>
> Can you advise where I can find this option?
>
You cannot assign this role to the host interface directly; you assign it to
the network that is attached to the interface, in
"Compute > Clusters > Clustername > Logical Networks > Manage Networks"
> Thanks,
>
> Douglas Duckworth, MSc, LFCS
> HPC System Administrator
> Scientific Computing Unit<https://scu.med.cornell.edu>
> Weill Cornell Medicine
> 1300 York Avenue
> New York, NY 10065
> E: doug@med.cornell.edu
> O: 212-746-6305
> F: 212-746-8690
>
>
> On Fri, Aug 24, 2018 at 11:52 AM Dominik Holler <dholler@redhat.com> wrote:
> On Fri, 24 Aug 2018 09:46:25 -0400
> Douglas Duckworth <dod2014@med.cornell.edu> wrote:
>
> > Sorry, I mean "migration network" for moving live migration traffic.
> >
>
> You have to create a new logical network in
> "Network > Networks > New"
> and assign this to ib0 in
> "Compute > Hosts > hostname > Network Interfaces > Setup Host Networks".
> After this you can assign a role to this network in
> "Compute > Clusters > Clustername > Logical Networks > Manage Networks"
>
>
> > FDR infiniband much faster than 1Gb network which currently acts as
> > migration network, vm network, display network, mgmt network, etc.
> >
> > Thanks,
> >
> > Douglas Duckworth, MSc, LFCS
> > HPC System Administrator
> > Scientific Computing Unit
> > Weill Cornell Medicine
> > 1300 York - LC-502
> > E: doug@med.cornell.edu
> > O: 212-746-6305
> > F: 212-746-8690
> >
> >
> > On Fri, Aug 24, 2018 at 9:36 AM, Dominik Holler <dholler@redhat.com>
> > wrote:
> >
> > > On Thu, 23 Aug 2018 13:51:39 -0400
> > > Douglas Duckworth <dod2014@med.cornell.edu> wrote:
> > >
> > > > THANKS!
> > > >
> > > > ib0 now up with NFS storage back on this hypervisor
> > > >
> > >
> > > Thanks for letting us know.
> > >
> > > > Though how do I make it a transfer network? I don't see an
> > > > option.
> > >
> > > I do not understand the meaning of "transfer network".
> > > The network interface to use for NFS results from the routing
> > > tables of the host.
> > > In "Compute > Clusters > Clustername > Logical Networks > Manage
> > > Networks" network roles for some kind of loads can be assigned, but
> > > not for NFS access.
> > >
> > >
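[The point made above, that the interface used for NFS follows the host's routing tables rather than any oVirt network role, can be illustrated with a small sketch. The NFS server address is hypothetical; a UDP connect() never sends a packet, it only asks the kernel which source address the chosen route would use.]

```python
import socket

def source_ip_for(dest: str, port: int = 2049) -> str:
    """Return the local address the kernel would route traffic to `dest` from.
    connect() on a UDP socket only performs route resolution; nothing is sent."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((dest, port))
        return s.getsockname()[0]
    finally:
        s.close()

if __name__ == "__main__":
    # Hypothetical NFS server address; if this prints the ib0 address
    # (e.g. 172.16.0.207), NFS traffic will leave via ib0.
    try:
        print("route source:", source_ip_for("172.16.0.10"))
    except OSError:
        print("no route to 172.16.0.10")
```

If the printed address belongs to ib0, NFS mounts will use ib0 regardless of which oVirt roles are assigned in Manage Networks.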
> > > > Thanks,
> > > >
> > > > Douglas Duckworth, MSc, LFCS
> > > > HPC System Administrator
> > > > Scientific Computing Unit
> > > > Weill Cornell Medicine
> > > > 1300 York - LC-502
> > > > E: doug@med.cornell.edu
> > > > O: 212-746-6305
> > > > F: 212-746-8690
> > > >
> > > >
> > > > On Thu, Aug 23, 2018 at 11:12 AM, Douglas Duckworth
> > > > <dod2014@med.cornell.edu>
> > > > > wrote:
> > > >
> > > > > Hi Dominik
> > > > >
> > > > > Yes, the network-script was created by our Ansible role that
> > > > > deploys CentOS hosts. It pulls the IP from DNS then templates
> > > > > the script and copies to host.
> > > > >
> > > > > I will try this oVirt step then see if it works!
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Douglas Duckworth, MSc, LFCS
> > > > > HPC System Administrator
> > > > > Scientific Computing Unit
> > > > > Weill Cornell Medicine
> > > > > 1300 York - LC-502
> > > > > E: doug@med.cornell.edu
> > > > > O: 212-746-6305
> > > > > F: 212-746-8690
> > > > >
> > > > >
> > > > > On Thu, Aug 23, 2018 at 11:09 AM, Dominik Holler
> > > > > <dholler@redhat.com> wrote:
> > > > >
> > > > >> Is ifcfg-ib0 created before adding the host?
> > > > >> Can ib0 be reconfigured using engine, e.g. by
> > > > >> "Compute > Hosts > hostx > Network Interfaces > Setup Host
> > > > >> Networks"? Is this some kind of self-hosted engine?
> > > > >>
> > > > >> On Thu, 23 Aug 2018 09:30:59 -0400
> > > > >> Douglas Duckworth <dod2014@med.cornell.edu> wrote:
> > > > >>
> > > > >> > Here's a link to the files:
> > > > >> >
> > > > >> > https://bit.ly/2wjZ6Vo
> > > > >> >
> > > > >> > Thank you!
> > > > >> >
> > > > >> > Thanks,
> > > > >> >
> > > > >> > Douglas Duckworth, MSc, LFCS
> > > > >> > HPC System Administrator
> > > > >> > Scientific Computing Unit
> > > > >> > Weill Cornell Medicine
> > > > >> > 1300 York - LC-502
> > > > >> > E: doug@med.cornell.edu
> > > > >> > O: 212-746-6305
> > > > >> > F: 212-746-8690
> > > > >> >
> > > > >> >
> > > > >> > On Thu, Aug 23, 2018 at 6:51 AM, Dominik Holler
> > > > >> > <dholler@redhat.com> wrote:
> > > > >> >
> > > > >> > > Would you please share the vdsm.log and the supervdsm.log
> > > > >> > > from this host?
> > > > >> > >
> > > > >> > > On Wed, 22 Aug 2018 11:36:09 -0400
> > > > >> > > Douglas Duckworth <dod2014@med.cornell.edu> wrote:
> > > > >> > >
> > > > >> > > > Hi
> > > > >> > > >
> > > > >> > > > I keep losing ib0 connection on hypervisor after adding
> > > > >> > > > host to engine. This makes the host not really work
> > > > >> > > > since NFS will be mounted over ib0.
> > > > >> > > >
> > > > >> > > > I don't really understand why this occurs.
> > > > >> > > >
> > > > >> > > > OS:
> > > > >> > > >
> > > > >> > > > [root@ovirt-hv2 ~]# cat /etc/redhat-release
> > > > >> > > > CentOS Linux release 7.5.1804 (Core)
> > > > >> > > >
> > > > >> > > > Here's the network script:
> > > > >> > > >
> > > > >> > > > [root@ovirt-hv2 ~]#
> > > > >> > > > cat /etc/sysconfig/network-scripts/ifcfg-ib0 DEVICE=ib0
> > > > >> > > > BOOTPROTO=static
> > > > >> > > > IPADDR=172.16.0.207
> > > > >> > > > NETMASK=255.255.255.0
> > > > >> > > > ONBOOT=yes
> > > > >> > > > ZONE=public
> > > > >> > > >
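[A plausible explanation, not confirmed in the thread, for the "No suitable device found for this connection" failure in this message: the ifcfg profile has no TYPE= line, so NetworkManager cannot match it to an IPoIB device. A sketch of a NetworkManager-compatible version of the same profile:]

```ini
# /etc/sysconfig/network-scripts/ifcfg-ib0 -- sketch; TYPE=InfiniBand is the
# key addition (assumption: its absence is why NetworkManager cannot bind
# the profile to the ib0 device)
DEVICE=ib0
TYPE=InfiniBand
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.16.0.207
NETMASK=255.255.255.0
ZONE=public
```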
> > > > >> > > > When I try "ifup"
> > > > >> > > >
> > > > >> > > > [root@ovirt-hv2 ~]# ifup ib0
> > > > >> > > > Error: Connection activation failed: No suitable device
> > > > >> > > > found for this connection.
> > > > >> > > >
> > > > >> > > > The error in syslog:
> > > > >> > > >
> > > > >> > > > Aug 22 11:31:50 ovirt-hv2 kernel: IPv4: martian source
> > > > >> > > > 172.16.0.87 from 172.16.0.49, on dev ib0
> > > > >> > > > Aug 22 11:31:53 ovirt-hv2 NetworkManager[1070]: <info>
> > > > >> > > > [1534951913.7486] audit: op="connection-activate"
> > > > >> > > > uuid="2ab4abde-b8a5-6cbc-19b1-2bfb193e4e89" name="System
> > > > >> > > > ib0" result="fail" reason="No suitable device found for
> > > > >> > > > this connection.
> > > > >> > > >
> > > > >> > > > As you can see media state up:
> > > > >> > > >
> > > > >> > > > [root@ovirt-hv2 ~]# ip a
> > > > >> > > > 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
> > > > >> > > > state UNKNOWN group default qlen 1000
> > > > >> > > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> > > > >> > > > inet 127.0.0.1/8 scope host lo
> > > > >> > > > valid_lft forever preferred_lft forever
> > > > >> > > > 2: em1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
> > > > >> > > > mq master ovirtmgmt state UP group default qlen 1000
> > > > >> > > > link/ether 50:9a:4c:89:d3:81 brd ff:ff:ff:ff:ff:ff
> > > > >> > > > 3: em2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500
> > > > >> > > > qdisc mq state DOWN group default qlen 1000
> > > > >> > > > link/ether 50:9a:4c:89:d3:82 brd ff:ff:ff:ff:ff:ff
> > > > >> > > > 4: p1p1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500
> > > > >> > > > qdisc mq state DOWN group default qlen 1000
> > > > >> > > > link/ether b4:96:91:13:ea:68 brd ff:ff:ff:ff:ff:ff
> > > > >> > > > 5: p1p2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500
> > > > >> > > > qdisc mq state DOWN group default qlen 1000
> > > > >> > > > link/ether b4:96:91:13:ea:6a brd ff:ff:ff:ff:ff:ff
> > > > >> > > > 6: idrac: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
> > > > >> > > > qdisc pfifo_fast state UNKNOWN group default qlen 1000
> > > > >> > > > link/ether 50:9a:4c:89:d3:84 brd ff:ff:ff:ff:ff:ff
> > > > >> > > > inet 169.254.0.2/16 brd 169.254.255.255 scope global
> > > > >> > > > idrac valid_lft forever preferred_lft forever
> > > > >> > > > 7: ib0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 2044 qdisc
> > > > >> > > > mq state UP group default qlen 256
> > > > >> > > > link/infiniband
> > > > >> > > > a0:00:02:08:fe:80:00:00:00:00:00:00:ec:0d:9a:03:00:1d:13:41
> > > > >> > > > brd
> > > > >> > > > 00:ff:ff:ff:ff:12:40:1b:ff:ff:00:00:00:00:00:00:ff:ff:ff:ff
> > > > >> > > > 8: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop
> > > > >> > > > state DOWN group default qlen 1000 link/ether
> > > > >> > > > 12:b4:30:22:39:5b brd ff:ff:ff:ff:ff:ff 9: br-int:
> > > > >> > > > <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
> > > > >> > > > group default qlen 1000 link/ether 3e:32:e6:66:98:49 brd
> > > > >> > > > ff:ff:ff:ff:ff:ff 25: ovirtmgmt:
> > > > >> > > > <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> > > > >> > > > state UP group default qlen 1000 link/ether
> > > > >> > > > 50:9a:4c:89:d3:81 brd ff:ff:ff:ff:ff:ff inet
> > > > >> > > > 10.0.0.183/16 brd 10.0.255.255 scope global ovirtmgmt
> > > > >> > > > valid_lft forever preferred_lft forever 26:
> > > > >> > > > genev_sys_6081: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu
> > > > >> > > > 65000 qdisc noqueue master ovs-system state UNKNOWN
> > > > >> > > > group default qlen 1000 link/ether aa:32:82:1b:01:d9 brd
> > > > >> > > > ff:ff:ff:ff:ff:ff 27: ;vdsmdummy;: <BROADCAST,MULTICAST>
> > > > >> > > > mtu 1500 qdisc noop state DOWN group default qlen 1000
> > > > >> > > > link/ether 32:ff:5d:b8:c2:b4 brd ff:ff:ff:ff:ff:ff
> > > > >> > > >
> > > > >> > > > The card is FDR:
> > > > >> > > >
> > > > >> > > > [root@ovirt-hv2 ~]# lspci -v | grep Mellanox
> > > > >> > > > 01:00.0 Network controller: Mellanox Technologies MT27500
> > > > >> > > > Family [ConnectX-3]
> > > > >> > > > Subsystem: Mellanox Technologies Device 0051
> > > > >> > > >
> > > > >> > > > Latest OFED driver:
> > > > >> > > >
> > > > >> > > > [root@ovirt-hv2 ~]# /etc/init.d/openibd status
> > > > >> > > >
> > > > >> > > > HCA driver loaded
> > > > >> > > >
> > > > >> > > > Configured IPoIB devices:
> > > > >> > > > ib0
> > > > >> > > >
> > > > >> > > > Currently active IPoIB devices:
> > > > >> > > > ib0
> > > > >> > > > Configured Mellanox EN devices:
> > > > >> > > >
> > > > >> > > > Currently active Mellanox devices:
> > > > >> > > > ib0
> > > > >> > > >
> > > > >> > > > The following OFED modules are loaded:
> > > > >> > > >
> > > > >> > > > rdma_ucm
> > > > >> > > > rdma_cm
> > > > >> > > > ib_ipoib
> > > > >> > > > mlx4_core
> > > > >> > > > mlx4_ib
> > > > >> > > > mlx4_en
> > > > >> > > > mlx5_core
> > > > >> > > > mlx5_ib
> > > > >> > > > ib_uverbs
> > > > >> > > > ib_umad
> > > > >> > > > ib_ucm
> > > > >> > > > ib_cm
> > > > >> > > > ib_core
> > > > >> > > > mlxfw
> > > > >> > > > mlx5_fpga_tools
> > > > >> > > >
> > > > >> > > > I can add an IP to ib0 using "ip addr" though I need
> > > > >> > > > Network Manager to work with ib0.
> > > > >> > > >
> > > > >> > > >
> > > > >> > > > Thanks,
> > > > >> > > >
> > > > >> > > > Douglas Duckworth, MSc, LFCS
> > > > >> > > > HPC System Administrator
> > > > >> > > > Scientific Computing Unit
> > > > >> > > > Weill Cornell Medicine
> > > > >> > > > 1300 York - LC-502
> > > > >> > > > E: doug@med.cornell.edu
> > > > >> > > > O: 212-746-6305
> > > > >> > > > F: 212-746-8690
> > > > >> > >
> > > > >> > >
> > > > >>
> > > > >>
> > > > >
> > >
> > >
>
5 years, 11 months
Re: oVirt Node 4.2.7 upgrade fails with broken dependencies ?
by Douglas Duckworth
MSG:
Error: Package: nbdkit-plugin-vddk-1.2.6-1.el7_6.2.x86_64 (updates)
Requires: nbdkit(x86-64) = 1.2.6-1.el7_6.2
Available: nbdkit-1.2.6-1.el7.x86_64 (base)
nbdkit(x86-64) = 1.2.6-1.el7
Available: nbdkit-1.2.6-1.el7_6.2.x86_64 (updates)
nbdkit(x86-64) = 1.2.6-1.el7_6.2
Installing: nbdkit-1.2.7-2.el7.x86_64 (ovirt-4.2-epel)
nbdkit(x86-64) = 1.2.7-2.el7
Cannot upload enabled repos report, is this client registered?
2018-12-10 16:09:40,983 p=22603 u=ovirt | PLAY RECAP *********************************************************************
2018-12-10 16:09:40,983 p=22603 u=ovirt | ovirt-hv3.pbtech : ok=1 changed=0 unreachable=0 failed=1
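What is happening in the error above: yum always resolves to the highest version-release across enabled repos, so nbdkit 1.2.7-2.el7 from ovirt-4.2-epel shadows the 1.2.6-1.el7_6.2 build that nbdkit-plugin-vddk requires with an exact-version dependency. A simplified sketch of the RPM version comparison (the real rpmvercmp has more rules, e.g. tilde and numeric-beats-alpha handling):

```python
import re

def rpm_vercmp(a: str, b: str) -> int:
    """Simplified rpmvercmp: split into numeric/alphabetic segments and
    compare pairwise, treating numeric segments as integers."""
    sa = re.findall(r"\d+|[A-Za-z]+", a)
    sb = re.findall(r"\d+|[A-Za-z]+", b)
    for x, y in zip(sa, sb):
        if x == y:
            continue
        if x.isdigit() and y.isdigit():
            return 1 if int(x) > int(y) else -1
        return 1 if x > y else -1
    # If one version is a prefix of the other, the longer one sorts higher
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

# epel's 1.2.7-2.el7 sorts above updates' 1.2.6-1.el7_6.2, so yum picks it
# and the vddk plugin's exact "= 1.2.6-1.el7_6.2" requirement breaks:
print(rpm_vercmp("1.2.7-2.el7", "1.2.6-1.el7_6.2"))  # 1
```

A workaround (an assumption, pending matching packages in the repos) would be `yum update --disablerepo=ovirt-4.2-epel` or an `exclude=nbdkit*` line in that repo's configuration.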
Thanks,
Douglas Duckworth, MSc, LFCS
HPC System Administrator
Scientific Computing Unit<https://scu.med.cornell.edu>
Weill Cornell Medicine
1300 York Avenue
New York, NY 10065
E: doug@med.cornell.edu
O: 212-746-6305
F: 212-746-8690
On Wed, Nov 14, 2018 at 11:09 AM Sandro Bonazzola <sbonazzo@redhat.com> wrote:
On Wed, Nov 14, 2018 at 4:27 PM Jayme <jaymef@gmail.com> wrote:
I am having the same issue as well when attempting to update oVirt Node to the latest version.
Please manually disable EPEL repository for now. We are checking with CentOS OpsTools SIG if we can get an updated collectd there.
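[Disabling EPEL can be done in the repo file itself; a sketch assuming the stock epel.repo section name. Alternatively, disable it per command with `yum --disablerepo=epel update`, or keep EPEL enabled but add an `exclude=collectd*` line to its section.]

```ini
# /etc/yum.repos.d/epel.repo (sketch; assumes the default [epel] section)
[epel]
enabled=0
```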
On Wed, Nov 14, 2018 at 11:07 AM Giulio Casella <giulio@di.unimi.it> wrote:
It's due to an update of collectd in EPEL, but the oVirt repos also contain
collectd-write_http and collectd-disk (still not updated). We have to
wait for the oVirt guys to release updated versions in the
ovirt-4.2-centos-opstools repo.
I think it'll be a matter of few days.
Ciao,
Giulio
On 14/11/2018 at 13:51, Rogério Ceni Coelho wrote:
> Ovirt Engine with same problem.
>
> [root@nscovirt42prdpoa ~]# yum update
> Loaded plugins: fastestmirror, versionlock
> Loading mirror speeds from cached hostfile
> * base: centos.brnet.net.br
> * epel: mirror.ci.ifes.edu.br
> * extras: centos.brnet.net.br
> * ovirt-4.2: mirror.linux.duke.edu
> * ovirt-4.2-epel: mirror.ci.ifes.edu.br
> * updates: centos.brnet.net.br
> Resolving Dependencies
> --> Running transaction check
> ---> Package collectd.x86_64 0:5.8.0-6.1.el7 will be updated
> --> Processing Dependency: collectd(x86-64) = 5.8.0-6.1.el7 for package:
> collectd-disk-5.8.0-6.1.el7.x86_64
> --> Processing Dependency: collectd(x86-64) = 5.8.0-6.1.el7 for package:
> collectd-write_http-5.8.0-6.1.el7.x86_64
> ---> Package collectd.x86_64 0:5.8.1-1.el7 will be an update
> ---> Package collectd-postgresql.x86_64 0:5.8.0-6.1.el7 will be updated
> ---> Package collectd-postgresql.x86_64 0:5.8.1-1.el7 will be an update
> ---> Package ovirt-engine-extensions-api-impl.noarch 0:4.2.7.4-1.el7
> will be updated
> ---> Package ovirt-engine-extensions-api-impl.noarch 0:4.2.7.5-1.el7
> will be an update
> ---> Package ovirt-engine-lib.noarch 0:4.2.7.4-1.el7 will be updated
> ---> Package ovirt-engine-lib.noarch 0:4.2.7.5-1.el7 will be an update
> ---> Package ovirt-engine-setup.noarch 0:4.2.7.4-1.el7 will be updated
> ---> Package ovirt-engine-setup.noarch 0:4.2.7.5-1.el7 will be an update
> ---> Package ovirt-engine-setup-base.noarch 0:4.2.7.4-1.el7 will be updated
> ---> Package ovirt-engine-setup-base.noarch 0:4.2.7.5-1.el7 will be an
> update
> ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
> 0:4.2.7.4-1.el7 will be updated
> ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch
> 0:4.2.7.5-1.el7 will be an update
> ---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch
> 0:4.2.7.4-1.el7 will be updated
> ---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch
> 0:4.2.7.5-1.el7 will be an update
> ---> Package ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
> 0:4.2.7.4-1.el7 will be updated
> ---> Package ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch
> 0:4.2.7.5-1.el7 will be an update
> ---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch
> 0:4.2.7.4-1.el7 will be updated
> ---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch
> 0:4.2.7.5-1.el7 will be an update
> ---> Package ovirt-engine-vmconsole-proxy-helper.noarch 0:4.2.7.4-1.el7
> will be updated
> ---> Package ovirt-engine-vmconsole-proxy-helper.noarch 0:4.2.7.5-1.el7
> will be an update
> ---> Package ovirt-engine-websocket-proxy.noarch 0:4.2.7.4-1.el7 will be
> updated
> ---> Package ovirt-engine-websocket-proxy.noarch 0:4.2.7.5-1.el7 will be
> an update
> ---> Package ovirt-release42.noarch 0:4.2.7-1.el7 will be updated
> ---> Package ovirt-release42.noarch 0:4.2.7.1-1.el7 will be an update
> --> Finished Dependency Resolution
> Error: Package: collectd-write_http-5.8.0-6.1.el7.x86_64
> (@ovirt-4.2-centos-opstools)
> Requires: collectd(x86-64) = 5.8.0-6.1.el7
> Removing: collectd-5.8.0-6.1.el7.x86_64
> (@ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-6.1.el7
> Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
> collectd(x86-64) = 5.8.1-1.el7
> Available: collectd-5.7.2-1.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.7.2-1.el7
> Available: collectd-5.7.2-3.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.7.2-3.el7
> Available: collectd-5.8.0-2.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-2.el7
> Available: collectd-5.8.0-3.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-3.el7
> Available: collectd-5.8.0-5.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-5.el7
> Error: Package: collectd-disk-5.8.0-6.1.el7.x86_64
> (@ovirt-4.2-centos-opstools)
> Requires: collectd(x86-64) = 5.8.0-6.1.el7
> Removing: collectd-5.8.0-6.1.el7.x86_64
> (@ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-6.1.el7
> Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
> collectd(x86-64) = 5.8.1-1.el7
> Available: collectd-5.7.2-1.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.7.2-1.el7
> Available: collectd-5.7.2-3.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.7.2-3.el7
> Available: collectd-5.8.0-2.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-2.el7
> Available: collectd-5.8.0-3.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-3.el7
> Available: collectd-5.8.0-5.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-5.el7
> You could try using --skip-broken to work around the problem
> You could try running: rpm -Va --nofiles --nodigest
> [root@nscovirt42prdpoa ~]#
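A note on the error above: EPEL is offering a newer collectd (5.8.1-1) than the CentOS opstools repo that ships the collectd-disk/collectd-write_http plugins (5.8.0-6.1), so yum wants to update the base package past what the installed plugins require. A common workaround (a sketch under assumptions, not an official fix) is to pin collectd to the opstools repo by excluding it from EPEL. The `/etc/yum.repos.d/epel.repo` path and the plain `[epel]` section header are the CentOS 7 defaults; check your own repo file before applying.

```shell
# Assumption: EPEL is configured in /etc/yum.repos.d/epel.repo with a
# plain [epel] section, as on a default CentOS 7 install.
REPO=${REPO:-/etc/yum.repos.d/epel.repo}

# Append "exclude=collectd*" directly under the [epel] header, once.
add_epel_exclude() {
  grep -q '^exclude=collectd\*' "$1" ||
    sed -i '/^\[epel\]/a exclude=collectd*' "$1"
}

# Try it on a scratch copy first, then apply and retry the update:
#   cp "$REPO" /tmp/epel.repo && add_epel_exclude /tmp/epel.repo
#   add_epel_exclude "$REPO" && yum clean metadata && yum update
```

Alternatively, `yum update --exclude='collectd*'` gives a one-shot version of the same pin without editing repo files.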
>
> On Wed, Nov 14, 2018 at 10:07, Rogério Ceni Coelho
> <rogeriocenicoelho(a)gmail.com> wrote:
>
> Hi all !
>
> Broken dependencies ?
>
> [root@nscovirtkvm41prdpoa ~]# yum update
> Loaded plugins: enabled_repos_upload, fastestmirror, package_upload,
> product-id, search-disabled-repos, subscription-manager, vdsmupgrade
> This system is not registered with an entitlement server. You can
> use subscription-manager to register.
> Loading mirror speeds from cached hostfile
> * base: mirror.ufscar.br
> * epel: mirror.ci.ifes.edu.br
> * extras: centos.brnet.net.br
> * ovirt-4.2: www.gtlib.gatech.edu
> * ovirt-4.2-epel: mirror.ci.ifes.edu.br
> * updates: mirror.ufscar.br
> Resolving Dependencies
> --> Running transaction check
> ---> Package collectd.x86_64 0:5.8.0-6.1.el7 will be updated
> --> Processing Dependency: collectd(x86-64) = 5.8.0-6.1.el7 for
> package: collectd-disk-5.8.0-6.1.el7.x86_64
> --> Processing Dependency: collectd(x86-64) = 5.8.0-6.1.el7 for
> package: collectd-write_http-5.8.0-6.1.el7.x86_64
> ---> Package collectd.x86_64 0:5.8.1-1.el7 will be an update
> ---> Package collectd-netlink.x86_64 0:5.8.0-6.1.el7 will be updated
> ---> Package collectd-netlink.x86_64 0:5.8.1-1.el7 will be an update
> ---> Package collectd-virt.x86_64 0:5.8.0-6.1.el7 will be updated
> ---> Package collectd-virt.x86_64 0:5.8.1-1.el7 will be an update
> ---> Package ovirt-hosted-engine-setup.noarch 0:2.2.30-1.el7 will be
> updated
> ---> Package ovirt-hosted-engine-setup.noarch 0:2.2.32-1.el7 will be
> an update
> --> Finished Dependency Resolution
> Error: Package: collectd-write_http-5.8.0-6.1.el7.x86_64
> (@ovirt-4.2-centos-opstools)
> Requires: collectd(x86-64) = 5.8.0-6.1.el7
> Removing: collectd-5.8.0-6.1.el7.x86_64
> (@ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-6.1.el7
> Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
> collectd(x86-64) = 5.8.1-1.el7
> Available: collectd-5.7.2-1.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.7.2-1.el7
> Available: collectd-5.7.2-3.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.7.2-3.el7
> Available: collectd-5.8.0-2.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-2.el7
> Available: collectd-5.8.0-3.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-3.el7
> Available: collectd-5.8.0-5.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-5.el7
> Error: Package: collectd-disk-5.8.0-6.1.el7.x86_64
> (@ovirt-4.2-centos-opstools)
> Requires: collectd(x86-64) = 5.8.0-6.1.el7
> Removing: collectd-5.8.0-6.1.el7.x86_64
> (@ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-6.1.el7
> Updated By: collectd-5.8.1-1.el7.x86_64 (epel)
> collectd(x86-64) = 5.8.1-1.el7
> Available: collectd-5.7.2-1.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.7.2-1.el7
> Available: collectd-5.7.2-3.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.7.2-3.el7
> Available: collectd-5.8.0-2.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-2.el7
> Available: collectd-5.8.0-3.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-3.el7
> Available: collectd-5.8.0-5.el7.x86_64
> (ovirt-4.2-centos-opstools)
> collectd(x86-64) = 5.8.0-5.el7
> You could try using --skip-broken to work around the problem
> You could try running: rpm -Va --nofiles --nodigest
> Uploading Enabled Repositories Report
> Loaded plugins: fastestmirror, product-id, subscription-manager
> This system is not registered with an entitlement server. You can
> use subscription-manager to register.
> Cannot upload enabled repos report, is this client registered?
> [root@nscovirtkvm41prdpoa ~]# yum repolist
> Loaded plugins: enabled_repos_upload, fastestmirror, package_upload,
> product-id, search-disabled-repos, subscription-manager, vdsmupgrade
> This system is not registered with an entitlement server. You can
> use subscription-manager to register.
> Loading mirror speeds from cached hostfile
> * base: mirror.ufscar.br
> * epel: mirror.ci.ifes.edu.br
> * extras: centos.brnet.net.br
> * ovirt-4.2: www.gtlib.gatech.edu
> * ovirt-4.2-epel: mirror.ci.ifes.edu.br
> * updates: mirror.ufscar.br
> repo id                             repo name                                                                  status
> base/7/x86_64                       CentOS-7 - Base                                                             9,911
> centos-sclo-rh-release/x86_64       CentOS-7 - SCLo rh                                                          8,099
> epel/x86_64                         Extra Packages for Enterprise Linux 7 - x86_64                             12,708
> extras/7/x86_64                     CentOS-7 - Extras                                                             434
> ovirt-4.2/7                         Latest oVirt 4.2 Release                                                    2,439
> ovirt-4.2-centos-gluster312/x86_64  CentOS-7 - Gluster 3.12                                                       262
> ovirt-4.2-centos-opstools/x86_64    CentOS-7 - OpsTools - release                                                 666
> ovirt-4.2-centos-ovirt42/x86_64     CentOS-7 - oVirt 4.2                                                          582
> ovirt-4.2-centos-qemu-ev/x86_64     CentOS-7 - QEMU EV                                                             63
> ovirt-4.2-epel/x86_64               Extra Packages for Enterprise Linux 7 - x86_64                             12,708
> ovirt-4.2-virtio-win-latest         virtio-win builds roughly matching what will be shipped in upcoming RHEL      38
> updates/7/x86_64                    CentOS-7 - Updates                                                          1,614
> zabbix/7/x86_64                     Zabbix Official Repository - x86_64                                           183
> zabbix-non-supported/x86_64         Zabbix Official Repository non-supported - x86_64                               4
> repolist: 49,711
> Uploading Enabled Repositories Report
> Loaded plugins: fastestmirror, product-id, subscription-manager
> This system is not registered with an entitlement server. You can
> use subscription-manager to register.
> Cannot upload enabled repos report, is this client registered?
> [root@nscovirtkvm41prdpoa ~]#
>
>
>
> _______________________________________________
> Users mailing list -- users(a)ovirt.org
> To unsubscribe send an email to users-leave(a)ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/6L7GNIJKKPS...
>
--
SANDRO BONAZZOLA
MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
Red Hat EMEA
sbonazzo(a)redhat.com
5 years, 11 months
LVM vs DF discrepancy usage (with NFS)
by Alexis Grillon
Dear All,
Following Tal's suggestion, I am sharing this problem (for which there
is a workaround) with you, hoping someone can identify something that
explains it.
. Settings :
We work mostly on HP hardware (ProLiant DL380 Gen9, DL380e Gen8, DL180 G6),
with an oVirt cluster on DL180 G6.
On the storage side, we export each host's local hard drives over NFS (I
know this is not the by-the-book optimal setup).
. Installation :
Each host is installed from the oVirt Node ISO.
The drives are not SSDs and are in RAID 5 (hardware or software).
Each host uses the standard oVirt Node automatic partitioning, with one
exception: we shrink the / mount point to create a new XFS partition for
the data storage mount point (in our case this partition is inside LVM).
. The problem :
On all of these hosts we saw a huge discrepancy between the usage
reported by the lvs command and by df for the same partition (sometimes
up to a 90% difference!).
Spoiler alert: df is right, LVM is wrong.
I think (but I might be wrong) that when we delete a lot of data on the
drive in oVirt, LVM is never told that those blocks were freed.
. Workarounds :
After discussing this with the LVM developers, we found a workaround:
fstrim, which tells LVM which blocks have actually been deleted.
Another solution is to put the data partition outside of LVM; that works
well too :)
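The gap described above can be made concrete with a small helper that reads the Use% df reports for a mount point, to compare against the Data% column of lvs. This is a sketch: the df output is canned sample data and /dev/mapper/onn-data is a made-up device name; on a real host, feed it the live `df -P` output instead.

```shell
# Helper: print the Use% that df reports for a given mount point.
df_used_pct() {
  # $1 = df -P output, $2 = mount point
  printf '%s\n' "$1" | awk -v m="$2" '$6 == m { sub(/%/, "", $5); print $5 }'
}

# Canned `df -P`-style sample (device name is made up).
sample='Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/mapper/onn-data 1048576 104858 943718 10% /data'

df_used_pct "$sample" /data   # prints 10

# When lvs shows a much higher Data% than df does here, discard the
# freed blocks so LVM catches up, and make that periodic:
#   fstrim -v /data
#   systemctl enable --now fstrim.timer
```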
One last thing: we saw this on the data partition of our hosts, but it
might be the case on the oVirt partitions too (where the current
discrepancy might be normal).
If someone has an idea of what we did wrong, or if there is a good
reason for this to happen...
Thank you.
--
regards,
Alexis Grillon
Pôle Humanités Numériques, Outils, Méthodes et Analyse de Données
Maison européenne des sciences de l'homme et de la société
MESHS - Lille Nord de France / CNRS
tel. +33 (0)3 20 12 58 57 | alexis.grillon(a)meshs.fr
www.meshs.fr | 2, rue des Canonniers 59000 Lille
-----------------------------------------------------------------
GPG fingerprint AC37 4C4B 6308 975B 77D4 772F 214F 1E97 6C08 CD11
5 years, 11 months
VDSM issues.
by Nicholas Vaughan
Hi,
We have 4 hosts on version 4.2.1.7-1.el7.centos, but we have an issue with
vdsm on our SPM host, and we cannot migrate any VMs away from it.
If I run 'vdsm-client Host getAllTasksStatuses' on the host, it returns 4
tasks, message: running job 1 of 1, code:0, ........ taskState:running for
all 4 tasks.
However, we have no operations in the engine that should be running (this is
not a hosted engine), just a stuck task that logs every minute or so: 'Clearing
asynchronous task Unknown that started at Thu Dec 06 14:42:03 GMT 2018'.
If I try to manually stop the task with 'vdsm-client Task stop taskID=', we get
'code=411, message=Task is aborted: u'task guid' - code 411'. If I then
try to clear the task, it fails because it is in a running state: vdsm-client:
Command Task.clear with args {'taskID': 'task guid'} failed: (code=410,
message=Operation is not allowed in this task state: ("can't clean in state
running",))
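Since stop/clear fails on tasks stuck in the running state, it helps to first collect the task IDs and review the stop/clear sequence before firing it. The sketch below extracts UUIDs from a getAllTasksStatuses-style blob and only echoes the vdsm-client calls; the JSON and the UUIDs in it are made up for illustration. (For what it's worth: restarting vdsmd does not normally shut down running VMs, though the host briefly goes unresponsive in the engine and the SPM role may fail over; test that outside production first.)

```shell
# Stand-in for: tasks=$(vdsm-client Host getAllTasksStatuses)
# The UUIDs below are made up for illustration.
tasks='{"226f8a51-0001-0002-0003-000000000001": {"taskState": "running"},
"226f8a51-0001-0002-0003-000000000002": {"taskState": "running"}}'

# Pull every task UUID out of the JSON blob.
list_task_ids() {
  printf '%s\n' "$1" |
    grep -oE '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
}

# Echo (do not run) the stop/clear sequence so it can be reviewed first.
for id in $(list_task_ids "$tasks"); do
  echo "vdsm-client Task stop taskID=$id"
  echo "vdsm-client Task clear taskID=$id"
done
```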
Attached is part of the (sanitised) VDSM log that keeps recurring. The VDSM
version is vdsm-client-4.20.17-1.el7.centos.noarch.
Does anyone have any ideas other than shutting down all the VMs on that
host and restarting it? I'm not sure whether I can restart the VDSM service
with VMs running.
Thanks,
Nick
5 years, 11 months
To much hosts in this cluster
by Stefan Wolf
Hello,
I had 3 hosts running oVirt, and with one of these 3 hosts I had problems
during boot.
I decided to remove that host from the cluster, so now I have two hosts.
But if I look at Hosted Engine in Cockpit, I still see all three hosts.
Why is kvm380 not removed?
How can I remove it?
Thx shb
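A likely explanation for the lingering host, with a hedged cleanup sketch: the Hosted Engine view in Cockpit reads the shared hosted-engine metadata, which is not touched when a host is removed from the engine, so the old entry stays behind. The usual cleanup is `hosted-engine --clean-metadata` run on one of the remaining HE hosts. In the sketch, `DRY_RUN=echo` keeps it side-effect free (drop it to run for real), and `STALE_ID=3` is an assumption: read kvm380's real host id from `hosted-engine --vm-status` first.

```shell
# DRY_RUN=echo makes this print the commands instead of executing them;
# remove it to run for real on one of the remaining hosted-engine hosts.
DRY_RUN=echo
STALE_ID=3   # assumption -- check `hosted-engine --vm-status` for kvm380's id

$DRY_RUN hosted-engine --vm-status
$DRY_RUN hosted-engine --clean-metadata --host-id="$STALE_ID" --force-clean
```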
5 years, 11 months